00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 93
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3271
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.033 The recommended git tool is: git
00:00:00.033 using credential 00000000-0000-0000-0000-000000000002
00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.049 Fetching changes from the remote Git repository
00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.079 Using shallow fetch with depth 1
00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.079 > git --version # timeout=10
00:00:00.117 > git --version # 'git version 2.39.2'
00:00:00.117 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.152 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.153 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.564 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.575 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.587 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.587 > git config core.sparsecheckout # timeout=10
00:00:02.597 > git read-tree -mu HEAD # timeout=10
00:00:02.615 > git checkout -f
7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:02.636 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:02.636 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:02.738 [Pipeline] Start of Pipeline
00:00:02.751 [Pipeline] library
00:00:02.752 Loading library shm_lib@master
00:00:02.753 Library shm_lib@master is cached. Copying from home.
00:00:02.771 [Pipeline] node
00:00:02.778 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:02.781 [Pipeline] {
00:00:02.790 [Pipeline] catchError
00:00:02.791 [Pipeline] {
00:00:02.805 [Pipeline] wrap
00:00:02.813 [Pipeline] {
00:00:02.820 [Pipeline] stage
00:00:02.822 [Pipeline] { (Prologue)
00:00:03.023 [Pipeline] sh
00:00:03.306 + logger -p user.info -t JENKINS-CI
00:00:03.321 [Pipeline] echo
00:00:03.323 Node: GP2
00:00:03.329 [Pipeline] sh
00:00:03.622 [Pipeline] setCustomBuildProperty
00:00:03.633 [Pipeline] echo
00:00:03.635 Cleanup processes
00:00:03.639 [Pipeline] sh
00:00:03.921 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.922 1060325 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.934 [Pipeline] sh
00:00:04.219 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.219 ++ grep -v 'sudo pgrep'
00:00:04.219 ++ awk '{print $1}'
00:00:04.219 + sudo kill -9
00:00:04.219 + true
00:00:04.232 [Pipeline] cleanWs
00:00:04.240 [WS-CLEANUP] Deleting project workspace...
00:00:04.240 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.245 [WS-CLEANUP] done
00:00:04.248 [Pipeline] setCustomBuildProperty
00:00:04.259 [Pipeline] sh
00:00:04.537 + sudo git config --global --replace-all safe.directory '*'
00:00:04.656 [Pipeline] httpRequest
00:00:04.684 [Pipeline] echo
00:00:04.686 Sorcerer 10.211.164.101 is alive
00:00:04.691 [Pipeline] httpRequest
00:00:04.696 HttpMethod: GET
00:00:04.696 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.697 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.700 Response Code: HTTP/1.1 200 OK
00:00:04.700 Success: Status code 200 is in the accepted range: 200,404
00:00:04.700 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.555 [Pipeline] sh
00:00:05.836 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.850 [Pipeline] httpRequest
00:00:05.899 [Pipeline] echo
00:00:05.901 Sorcerer 10.211.164.101 is alive
00:00:05.908 [Pipeline] httpRequest
00:00:05.912 HttpMethod: GET
00:00:05.913 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:05.913 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:05.924 Response Code: HTTP/1.1 200 OK
00:00:05.924 Success: Status code 200 is in the accepted range: 200,404
00:00:05.925 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:36.849 [Pipeline] sh
00:00:37.130 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:40.420 [Pipeline] sh
00:00:40.701 + git -C spdk log --oneline -n5
00:00:40.701 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:00:40.701 330a4f94d nvme: check pthread_mutex_destroy() return value
00:00:40.701 7b72c3ced nvme: add nvme_ctrlr_lock
00:00:40.701 fc7a37019 nvme: always
use nvme_robust_mutex_lock for ctrlr_lock
00:00:40.701 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout
00:00:40.717 [Pipeline] withCredentials
00:00:40.728 > git --version # timeout=10
00:00:40.739 > git --version # 'git version 2.39.2'
00:00:40.753 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:40.754 [Pipeline] {
00:00:40.761 [Pipeline] retry
00:00:40.762 [Pipeline] {
00:00:40.775 [Pipeline] sh
00:00:41.055 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:42.004 [Pipeline] }
00:00:42.028 [Pipeline] // retry
00:00:42.033 [Pipeline] }
00:00:42.056 [Pipeline] // withCredentials
00:00:42.068 [Pipeline] httpRequest
00:00:42.094 [Pipeline] echo
00:00:42.096 Sorcerer 10.211.164.101 is alive
00:00:42.105 [Pipeline] httpRequest
00:00:42.110 HttpMethod: GET
00:00:42.111 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:42.111 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:42.127 Response Code: HTTP/1.1 200 OK
00:00:42.128 Success: Status code 200 is in the accepted range: 200,404
00:00:42.128 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:55.568 [Pipeline] sh
00:00:55.854 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:57.770 [Pipeline] sh
00:00:58.057 + git -C dpdk log --oneline -n5
00:00:58.057 caf0f5d395 version: 22.11.4
00:00:58.057 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:58.057 dc9c799c7d vhost: fix missing spinlock unlock
00:00:58.057 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:58.057 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:58.068 [Pipeline] }
00:00:58.086 [Pipeline] // stage
00:00:58.095 [Pipeline] stage
00:00:58.098 [Pipeline] { (Prepare)
00:00:58.121 [Pipeline] writeFile
00:00:58.174 [Pipeline] sh
00:00:58.482 +
logger -p user.info -t JENKINS-CI
00:00:58.500 [Pipeline] sh
00:00:58.785 + logger -p user.info -t JENKINS-CI
00:00:58.799 [Pipeline] sh
00:00:59.079 + cat autorun-spdk.conf
00:00:59.079 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.079 SPDK_TEST_NVMF=1
00:00:59.079 SPDK_TEST_NVME_CLI=1
00:00:59.079 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.079 SPDK_TEST_NVMF_NICS=e810
00:00:59.079 SPDK_TEST_VFIOUSER=1
00:00:59.079 SPDK_RUN_UBSAN=1
00:00:59.079 NET_TYPE=phy
00:00:59.079 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:59.079 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:59.085 RUN_NIGHTLY=1
00:00:59.093 [Pipeline] readFile
00:00:59.119 [Pipeline] withEnv
00:00:59.121 [Pipeline] {
00:00:59.134 [Pipeline] sh
00:00:59.418 + set -ex
00:00:59.418 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:59.418 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:59.418 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.418 ++ SPDK_TEST_NVMF=1
00:00:59.418 ++ SPDK_TEST_NVME_CLI=1
00:00:59.418 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.418 ++ SPDK_TEST_NVMF_NICS=e810
00:00:59.418 ++ SPDK_TEST_VFIOUSER=1
00:00:59.418 ++ SPDK_RUN_UBSAN=1
00:00:59.418 ++ NET_TYPE=phy
00:00:59.418 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:59.418 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:00:59.418 ++ RUN_NIGHTLY=1
00:00:59.418 + case $SPDK_TEST_NVMF_NICS in
00:00:59.418 + DRIVERS=ice
00:00:59.418 + [[ tcp == \r\d\m\a ]]
00:00:59.418 + [[ -n ice ]]
00:00:59.418 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:59.418 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:59.418 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:59.418 rmmod: ERROR: Module irdma is not currently loaded
00:00:59.418 rmmod: ERROR: Module i40iw is not currently loaded
00:00:59.418 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:59.418 + true
00:00:59.418 + for D in $DRIVERS
00:00:59.418 + sudo
modprobe ice
00:00:59.418 + exit 0
00:00:59.430 [Pipeline] }
00:00:59.449 [Pipeline] // withEnv
00:00:59.455 [Pipeline] }
00:00:59.473 [Pipeline] // stage
00:00:59.484 [Pipeline] catchError
00:00:59.486 [Pipeline] {
00:00:59.502 [Pipeline] timeout
00:00:59.502 Timeout set to expire in 50 min
00:00:59.505 [Pipeline] {
00:00:59.520 [Pipeline] stage
00:00:59.522 [Pipeline] { (Tests)
00:00:59.537 [Pipeline] sh
00:00:59.821 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.821 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.821 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.821 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:59.821 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.821 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.821 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:59.821 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.821 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:59.821 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:59.821 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:59.821 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:59.821 + source /etc/os-release
00:00:59.821 ++ NAME='Fedora Linux'
00:00:59.821 ++ VERSION='38 (Cloud Edition)'
00:00:59.821 ++ ID=fedora
00:00:59.821 ++ VERSION_ID=38
00:00:59.821 ++ VERSION_CODENAME=
00:00:59.821 ++ PLATFORM_ID=platform:f38
00:00:59.821 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:59.821 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:59.821 ++ LOGO=fedora-logo-icon
00:00:59.821 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:59.821 ++ HOME_URL=https://fedoraproject.org/
00:00:59.821 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:59.821 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:59.821 ++
BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:59.821 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:59.821 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:59.821 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:59.821 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:59.821 ++ SUPPORT_END=2024-05-14
00:00:59.821 ++ VARIANT='Cloud Edition'
00:00:59.821 ++ VARIANT_ID=cloud
00:00:59.821 + uname -a
00:00:59.821 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:59.821 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:00.754 Hugepages
00:01:00.754 node hugesize free / total
00:01:00.754 node0 1048576kB 0 / 0
00:01:00.754 node0 2048kB 0 / 0
00:01:00.754 node1 1048576kB 0 / 0
00:01:00.754 node1 2048kB 0 / 0
00:01:00.754
00:01:00.754 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:00.754 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:01:00.754 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:01:00.754 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:01:00.754 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:00.754 + rm -f /tmp/spdk-ld-path
00:01:00.754 + source autorun-spdk.conf
00:01:00.754 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.754 ++ SPDK_TEST_NVMF=1
00:01:00.754 ++
SPDK_TEST_NVME_CLI=1
00:01:00.754 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.754 ++ SPDK_TEST_NVMF_NICS=e810
00:01:00.754 ++ SPDK_TEST_VFIOUSER=1
00:01:00.754 ++ SPDK_RUN_UBSAN=1
00:01:00.754 ++ NET_TYPE=phy
00:01:00.754 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:00.754 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:00.754 ++ RUN_NIGHTLY=1
00:01:00.754 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:00.754 + [[ -n '' ]]
00:01:00.754 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.754 + for M in /var/spdk/build-*-manifest.txt
00:01:00.754 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:00.754 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.754 + for M in /var/spdk/build-*-manifest.txt
00:01:00.754 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:00.754 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:00.754 ++ uname
00:01:00.754 + [[ Linux == \L\i\n\u\x ]]
00:01:00.754 + sudo dmesg -T
00:01:00.754 + sudo dmesg --clear
00:01:01.013 + dmesg_pid=1060920
00:01:01.013 + [[ Fedora Linux == FreeBSD ]]
00:01:01.013 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.013 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.013 + sudo dmesg -Tw
00:01:01.013 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.013 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:01.013 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:01.013 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.013 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.013 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.013 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.013 + [[ !
-v VFIO_QEMU_BIN ]]
00:01:01.013 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.013 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.013 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.013 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.013 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.013 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.013 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.013 Test configuration:
00:01:01.013 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.013 SPDK_TEST_NVMF=1
00:01:01.013 SPDK_TEST_NVME_CLI=1
00:01:01.013 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.013 SPDK_TEST_NVMF_NICS=e810
00:01:01.013 SPDK_TEST_VFIOUSER=1
00:01:01.013 SPDK_RUN_UBSAN=1
00:01:01.013 NET_TYPE=phy
00:01:01.013 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:01.013 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:01.013 RUN_NIGHTLY=1
23:44:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:01.013 23:44:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:01.013 23:44:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:01.013 23:44:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:01.013 23:44:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.013 23:44:35 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.013 23:44:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.013 23:44:35 -- paths/export.sh@5 -- $ export PATH
00:01:01.013 23:44:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.013 23:44:35 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:01.013 23:44:35 -- common/autobuild_common.sh@437 -- $ date +%s
00:01:01.013 23:44:35 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721079875.XXXXXX
00:01:01.013 23:44:35 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721079875.hwhHsr
00:01:01.013 23:44:35 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:01:01.013 23:44:35 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:01:01.013 23:44:35 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:01.013 23:44:35 --
common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:01.013 23:44:35 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:01.013 23:44:35 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:01.013 23:44:35 -- common/autobuild_common.sh@453 -- $ get_config_params
00:01:01.013 23:44:35 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:01:01.013 23:44:35 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.013 23:44:35 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:01.013 23:44:35 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:01:01.013 23:44:35 -- pm/common@17 -- $ local monitor
00:01:01.013 23:44:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.013 23:44:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.013 23:44:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.013 23:44:35 -- pm/common@21 -- $ date +%s
00:01:01.013 23:44:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.013 23:44:35 -- pm/common@21 -- $ date +%s
00:01:01.013 23:44:35 -- pm/common@25 -- $ sleep 1
00:01:01.013 23:44:35 -- pm/common@21 -- $ date +%s
00:01:01.013 23:44:35 -- pm/common@21 -- $ date +%s
00:01:01.013 23:44:35 -- pm/common@21 -- $
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079875
00:01:01.013 23:44:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079875
00:01:01.013 23:44:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079875
00:01:01.013 23:44:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079875
00:01:01.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079875_collect-vmstat.pm.log
00:01:01.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079875_collect-cpu-load.pm.log
00:01:01.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079875_collect-cpu-temp.pm.log
00:01:01.013 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079875_collect-bmc-pm.bmc.pm.log
00:01:01.946 23:44:36 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:01:01.946 23:44:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:01.946 23:44:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:01.946 23:44:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.946 23:44:36 -- spdk/autobuild.sh@16 -- $ date -u
00:01:01.946 Mon Jul 15 09:44:36 PM UTC 2024
00:01:01.946 23:44:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:01.946 v24.05-13-g5fa2f5086
00:01:01.946 23:44:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:01.946 23:44:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:01.946 23:44:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:01.946 23:44:36 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:01:01.946 23:44:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:01.946 23:44:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.946 ************************************
00:01:01.946 START TEST ubsan
00:01:01.946 ************************************
00:01:01.946 23:44:36 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:01:01.946 using ubsan
00:01:01.946
00:01:01.946 real 0m0.000s
00:01:01.946 user 0m0.000s
00:01:01.946 sys 0m0.000s
00:01:01.946 23:44:36 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:01:01.946 23:44:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:01.946 ************************************
00:01:01.946 END TEST ubsan
00:01:01.946 ************************************
00:01:01.946 23:44:36 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:01.946 23:44:36 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:01.946 23:44:36 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:01.946 23:44:36 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:01:01.946 23:44:36 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:01.946 23:44:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.946 ************************************
00:01:01.946 START TEST build_native_dpdk
00:01:01.946 ************************************
00:01:01.946 23:44:36 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local
external_dpdk_base_dir
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ !
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:01.946 caf0f5d395 version: 22.11.4
00:01:01.946 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:01.946 dc9c799c7d vhost: fix missing spinlock unlock
00:01:01.946 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:01.946 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:01.946 23:44:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1
]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:02.203 23:44:36 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:02.203 23:44:36 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:02.204 23:44:36 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:02.204 patching file config/rte_config.h
00:01:02.204 Hunk #1 succeeded at 60 (offset 1 line).
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:02.204 23:44:36 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:06.389 The Meson build system
00:01:06.389 Version: 1.3.1
00:01:06.389 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:06.389 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:06.389 Build type: native build
00:01:06.389 Program cat found: YES (/usr/bin/cat)
00:01:06.389 Project name: DPDK
00:01:06.389 Project version: 22.11.4
00:01:06.389 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:06.389 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:06.389 Host machine cpu family: x86_64
00:01:06.389 Host machine cpu: x86_64
00:01:06.389 Message: ## Building in Developer Mode ##
00:01:06.389 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:06.389 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:06.389 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:06.389 Program objdump found: YES (/usr/bin/objdump)
00:01:06.389 Program python3 found: YES (/usr/bin/python3)
Program cat found: YES (/usr/bin/cat) 00:01:06.389 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:06.389 Checking for size of "void *" : 8 00:01:06.389 Checking for size of "void *" : 8 (cached) 00:01:06.389 Library m found: YES 00:01:06.389 Library numa found: YES 00:01:06.389 Has header "numaif.h" : YES 00:01:06.389 Library fdt found: NO 00:01:06.389 Library execinfo found: NO 00:01:06.389 Has header "execinfo.h" : YES 00:01:06.389 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.389 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:06.389 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:06.389 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:06.389 Run-time dependency openssl found: YES 3.0.9 00:01:06.389 Run-time dependency libpcap found: YES 1.10.4 00:01:06.389 Has header "pcap.h" with dependency libpcap: YES 00:01:06.389 Compiler for C supports arguments -Wcast-qual: YES 00:01:06.389 Compiler for C supports arguments -Wdeprecated: YES 00:01:06.389 Compiler for C supports arguments -Wformat: YES 00:01:06.389 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:06.389 Compiler for C supports arguments -Wformat-security: NO 00:01:06.389 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.389 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:06.389 Compiler for C supports arguments -Wnested-externs: YES 00:01:06.389 Compiler for C supports arguments -Wold-style-definition: YES 00:01:06.389 Compiler for C supports arguments -Wpointer-arith: YES 00:01:06.389 Compiler for C supports arguments -Wsign-compare: YES 00:01:06.389 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:06.389 Compiler for C supports arguments -Wundef: YES 00:01:06.389 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.389 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:06.389 
Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:06.389 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:06.389 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:06.389 Compiler for C supports arguments -mavx512f: YES 00:01:06.389 Checking if "AVX512 checking" compiles: YES 00:01:06.389 Fetching value of define "__SSE4_2__" : 1 00:01:06.389 Fetching value of define "__AES__" : 1 00:01:06.389 Fetching value of define "__AVX__" : 1 00:01:06.389 Fetching value of define "__AVX2__" : (undefined) 00:01:06.389 Fetching value of define "__AVX512BW__" : (undefined) 00:01:06.389 Fetching value of define "__AVX512CD__" : (undefined) 00:01:06.389 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:06.390 Fetching value of define "__AVX512F__" : (undefined) 00:01:06.390 Fetching value of define "__AVX512VL__" : (undefined) 00:01:06.390 Fetching value of define "__PCLMUL__" : 1 00:01:06.390 Fetching value of define "__RDRND__" : (undefined) 00:01:06.390 Fetching value of define "__RDSEED__" : (undefined) 00:01:06.390 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:06.390 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:06.390 Message: lib/kvargs: Defining dependency "kvargs" 00:01:06.390 Message: lib/telemetry: Defining dependency "telemetry" 00:01:06.390 Checking for function "getentropy" : YES 00:01:06.390 Message: lib/eal: Defining dependency "eal" 00:01:06.390 Message: lib/ring: Defining dependency "ring" 00:01:06.390 Message: lib/rcu: Defining dependency "rcu" 00:01:06.390 Message: lib/mempool: Defining dependency "mempool" 00:01:06.390 Message: lib/mbuf: Defining dependency "mbuf" 00:01:06.390 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:06.390 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.390 Compiler for C supports arguments -mpclmul: YES 00:01:06.390 Compiler for C supports arguments -maes: YES 00:01:06.390 Compiler for C 
supports arguments -mavx512f: YES (cached) 00:01:06.390 Compiler for C supports arguments -mavx512bw: YES 00:01:06.390 Compiler for C supports arguments -mavx512dq: YES 00:01:06.390 Compiler for C supports arguments -mavx512vl: YES 00:01:06.390 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:06.390 Compiler for C supports arguments -mavx2: YES 00:01:06.390 Compiler for C supports arguments -mavx: YES 00:01:06.390 Message: lib/net: Defining dependency "net" 00:01:06.390 Message: lib/meter: Defining dependency "meter" 00:01:06.390 Message: lib/ethdev: Defining dependency "ethdev" 00:01:06.390 Message: lib/pci: Defining dependency "pci" 00:01:06.390 Message: lib/cmdline: Defining dependency "cmdline" 00:01:06.390 Message: lib/metrics: Defining dependency "metrics" 00:01:06.390 Message: lib/hash: Defining dependency "hash" 00:01:06.390 Message: lib/timer: Defining dependency "timer" 00:01:06.390 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:06.390 Compiler for C supports arguments -mavx2: YES (cached) 00:01:06.390 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:06.390 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:06.390 Message: lib/acl: Defining dependency "acl" 00:01:06.390 Message: lib/bbdev: Defining dependency "bbdev" 00:01:06.390 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:06.390 Run-time dependency libelf found: YES 0.190 00:01:06.390 Message: lib/bpf: Defining dependency "bpf" 00:01:06.390 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:06.390 Message: lib/compressdev: Defining dependency "compressdev" 00:01:06.390 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:06.390 Message: lib/distributor: 
Defining dependency "distributor" 00:01:06.390 Message: lib/efd: Defining dependency "efd" 00:01:06.390 Message: lib/eventdev: Defining dependency "eventdev" 00:01:06.390 Message: lib/gpudev: Defining dependency "gpudev" 00:01:06.390 Message: lib/gro: Defining dependency "gro" 00:01:06.390 Message: lib/gso: Defining dependency "gso" 00:01:06.390 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:06.390 Message: lib/jobstats: Defining dependency "jobstats" 00:01:06.390 Message: lib/latencystats: Defining dependency "latencystats" 00:01:06.390 Message: lib/lpm: Defining dependency "lpm" 00:01:06.390 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:06.390 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:06.390 Message: lib/member: Defining dependency "member" 00:01:06.390 Message: lib/pcapng: Defining dependency "pcapng" 00:01:06.390 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:06.390 Message: lib/power: Defining dependency "power" 00:01:06.390 Message: lib/rawdev: Defining dependency "rawdev" 00:01:06.390 Message: lib/regexdev: Defining dependency "regexdev" 00:01:06.390 Message: lib/dmadev: Defining dependency "dmadev" 00:01:06.390 Message: lib/rib: Defining dependency "rib" 00:01:06.390 Message: lib/reorder: Defining dependency "reorder" 00:01:06.390 Message: lib/sched: Defining dependency "sched" 00:01:06.390 Message: lib/security: Defining dependency "security" 00:01:06.390 Message: lib/stack: Defining dependency "stack" 00:01:06.390 Has header "linux/userfaultfd.h" : YES 00:01:06.390 Message: lib/vhost: Defining dependency "vhost" 00:01:06.390 Message: lib/ipsec: Defining dependency "ipsec" 00:01:06.390 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.390 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 
00:01:06.390 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:06.390 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:06.390 Message: lib/fib: Defining dependency "fib" 00:01:06.390 Message: lib/port: Defining dependency "port" 00:01:06.390 Message: lib/pdump: Defining dependency "pdump" 00:01:06.390 Message: lib/table: Defining dependency "table" 00:01:06.390 Message: lib/pipeline: Defining dependency "pipeline" 00:01:06.390 Message: lib/graph: Defining dependency "graph" 00:01:06.390 Message: lib/node: Defining dependency "node" 00:01:06.390 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:06.390 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:06.390 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:06.390 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:06.390 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:06.390 Compiler for C supports arguments -Wno-unused-value: YES 00:01:07.772 Compiler for C supports arguments -Wno-format: YES 00:01:07.772 Compiler for C supports arguments -Wno-format-security: YES 00:01:07.772 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:07.772 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:07.772 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:07.772 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:07.772 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:07.772 Compiler for C supports arguments -mavx2: YES (cached) 00:01:07.772 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:07.772 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:07.772 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:07.772 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:07.772 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:07.772 Program doxygen found: YES 
(/usr/bin/doxygen) 00:01:07.772 Configuring doxy-api.conf using configuration 00:01:07.772 Program sphinx-build found: NO 00:01:07.772 Configuring rte_build_config.h using configuration 00:01:07.772 Message: 00:01:07.772 ================= 00:01:07.772 Applications Enabled 00:01:07.772 ================= 00:01:07.772 00:01:07.772 apps: 00:01:07.772 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:07.772 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:07.772 test-security-perf, 00:01:07.772 00:01:07.772 Message: 00:01:07.772 ================= 00:01:07.772 Libraries Enabled 00:01:07.772 ================= 00:01:07.772 00:01:07.772 libs: 00:01:07.772 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:07.772 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:07.772 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:07.772 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:07.772 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:07.772 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:07.772 table, pipeline, graph, node, 00:01:07.772 00:01:07.772 Message: 00:01:07.772 =============== 00:01:07.772 Drivers Enabled 00:01:07.772 =============== 00:01:07.772 00:01:07.772 common: 00:01:07.772 00:01:07.772 bus: 00:01:07.772 pci, vdev, 00:01:07.772 mempool: 00:01:07.772 ring, 00:01:07.772 dma: 00:01:07.772 00:01:07.772 net: 00:01:07.772 i40e, 00:01:07.772 raw: 00:01:07.772 00:01:07.772 crypto: 00:01:07.772 00:01:07.772 compress: 00:01:07.772 00:01:07.772 regex: 00:01:07.772 00:01:07.772 vdpa: 00:01:07.772 00:01:07.772 event: 00:01:07.772 00:01:07.772 baseband: 00:01:07.772 00:01:07.772 gpu: 00:01:07.772 00:01:07.772 00:01:07.772 Message: 00:01:07.772 ================= 00:01:07.772 Content Skipped 00:01:07.772 ================= 00:01:07.772 00:01:07.772 apps: 
00:01:07.772 00:01:07.772 libs: 00:01:07.772 kni: explicitly disabled via build config (deprecated lib) 00:01:07.772 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:07.772 00:01:07.772 drivers: 00:01:07.772 common/cpt: not in enabled drivers build config 00:01:07.772 common/dpaax: not in enabled drivers build config 00:01:07.772 common/iavf: not in enabled drivers build config 00:01:07.772 common/idpf: not in enabled drivers build config 00:01:07.772 common/mvep: not in enabled drivers build config 00:01:07.772 common/octeontx: not in enabled drivers build config 00:01:07.772 bus/auxiliary: not in enabled drivers build config 00:01:07.772 bus/dpaa: not in enabled drivers build config 00:01:07.772 bus/fslmc: not in enabled drivers build config 00:01:07.772 bus/ifpga: not in enabled drivers build config 00:01:07.772 bus/vmbus: not in enabled drivers build config 00:01:07.772 common/cnxk: not in enabled drivers build config 00:01:07.772 common/mlx5: not in enabled drivers build config 00:01:07.772 common/qat: not in enabled drivers build config 00:01:07.772 common/sfc_efx: not in enabled drivers build config 00:01:07.772 mempool/bucket: not in enabled drivers build config 00:01:07.772 mempool/cnxk: not in enabled drivers build config 00:01:07.772 mempool/dpaa: not in enabled drivers build config 00:01:07.772 mempool/dpaa2: not in enabled drivers build config 00:01:07.772 mempool/octeontx: not in enabled drivers build config 00:01:07.772 mempool/stack: not in enabled drivers build config 00:01:07.772 dma/cnxk: not in enabled drivers build config 00:01:07.772 dma/dpaa: not in enabled drivers build config 00:01:07.772 dma/dpaa2: not in enabled drivers build config 00:01:07.772 dma/hisilicon: not in enabled drivers build config 00:01:07.772 dma/idxd: not in enabled drivers build config 00:01:07.772 dma/ioat: not in enabled drivers build config 00:01:07.772 dma/skeleton: not in enabled drivers build config 00:01:07.772 net/af_packet: not in 
enabled drivers build config 00:01:07.772 net/af_xdp: not in enabled drivers build config 00:01:07.772 net/ark: not in enabled drivers build config 00:01:07.772 net/atlantic: not in enabled drivers build config 00:01:07.772 net/avp: not in enabled drivers build config 00:01:07.772 net/axgbe: not in enabled drivers build config 00:01:07.772 net/bnx2x: not in enabled drivers build config 00:01:07.772 net/bnxt: not in enabled drivers build config 00:01:07.772 net/bonding: not in enabled drivers build config 00:01:07.772 net/cnxk: not in enabled drivers build config 00:01:07.772 net/cxgbe: not in enabled drivers build config 00:01:07.772 net/dpaa: not in enabled drivers build config 00:01:07.772 net/dpaa2: not in enabled drivers build config 00:01:07.772 net/e1000: not in enabled drivers build config 00:01:07.772 net/ena: not in enabled drivers build config 00:01:07.772 net/enetc: not in enabled drivers build config 00:01:07.772 net/enetfec: not in enabled drivers build config 00:01:07.772 net/enic: not in enabled drivers build config 00:01:07.772 net/failsafe: not in enabled drivers build config 00:01:07.772 net/fm10k: not in enabled drivers build config 00:01:07.772 net/gve: not in enabled drivers build config 00:01:07.772 net/hinic: not in enabled drivers build config 00:01:07.772 net/hns3: not in enabled drivers build config 00:01:07.772 net/iavf: not in enabled drivers build config 00:01:07.772 net/ice: not in enabled drivers build config 00:01:07.772 net/idpf: not in enabled drivers build config 00:01:07.772 net/igc: not in enabled drivers build config 00:01:07.772 net/ionic: not in enabled drivers build config 00:01:07.772 net/ipn3ke: not in enabled drivers build config 00:01:07.772 net/ixgbe: not in enabled drivers build config 00:01:07.772 net/kni: not in enabled drivers build config 00:01:07.772 net/liquidio: not in enabled drivers build config 00:01:07.772 net/mana: not in enabled drivers build config 00:01:07.772 net/memif: not in enabled drivers build 
config 00:01:07.772 net/mlx4: not in enabled drivers build config 00:01:07.772 net/mlx5: not in enabled drivers build config 00:01:07.772 net/mvneta: not in enabled drivers build config 00:01:07.772 net/mvpp2: not in enabled drivers build config 00:01:07.772 net/netvsc: not in enabled drivers build config 00:01:07.772 net/nfb: not in enabled drivers build config 00:01:07.772 net/nfp: not in enabled drivers build config 00:01:07.772 net/ngbe: not in enabled drivers build config 00:01:07.772 net/null: not in enabled drivers build config 00:01:07.772 net/octeontx: not in enabled drivers build config 00:01:07.772 net/octeon_ep: not in enabled drivers build config 00:01:07.772 net/pcap: not in enabled drivers build config 00:01:07.772 net/pfe: not in enabled drivers build config 00:01:07.772 net/qede: not in enabled drivers build config 00:01:07.772 net/ring: not in enabled drivers build config 00:01:07.772 net/sfc: not in enabled drivers build config 00:01:07.772 net/softnic: not in enabled drivers build config 00:01:07.772 net/tap: not in enabled drivers build config 00:01:07.772 net/thunderx: not in enabled drivers build config 00:01:07.772 net/txgbe: not in enabled drivers build config 00:01:07.772 net/vdev_netvsc: not in enabled drivers build config 00:01:07.772 net/vhost: not in enabled drivers build config 00:01:07.772 net/virtio: not in enabled drivers build config 00:01:07.772 net/vmxnet3: not in enabled drivers build config 00:01:07.772 raw/cnxk_bphy: not in enabled drivers build config 00:01:07.772 raw/cnxk_gpio: not in enabled drivers build config 00:01:07.772 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:07.772 raw/ifpga: not in enabled drivers build config 00:01:07.772 raw/ntb: not in enabled drivers build config 00:01:07.772 raw/skeleton: not in enabled drivers build config 00:01:07.772 crypto/armv8: not in enabled drivers build config 00:01:07.772 crypto/bcmfs: not in enabled drivers build config 00:01:07.772 crypto/caam_jr: not in enabled 
drivers build config 00:01:07.772 crypto/ccp: not in enabled drivers build config 00:01:07.772 crypto/cnxk: not in enabled drivers build config 00:01:07.772 crypto/dpaa_sec: not in enabled drivers build config 00:01:07.772 crypto/dpaa2_sec: not in enabled drivers build config 00:01:07.772 crypto/ipsec_mb: not in enabled drivers build config 00:01:07.772 crypto/mlx5: not in enabled drivers build config 00:01:07.772 crypto/mvsam: not in enabled drivers build config 00:01:07.772 crypto/nitrox: not in enabled drivers build config 00:01:07.772 crypto/null: not in enabled drivers build config 00:01:07.772 crypto/octeontx: not in enabled drivers build config 00:01:07.772 crypto/openssl: not in enabled drivers build config 00:01:07.772 crypto/scheduler: not in enabled drivers build config 00:01:07.772 crypto/uadk: not in enabled drivers build config 00:01:07.772 crypto/virtio: not in enabled drivers build config 00:01:07.772 compress/isal: not in enabled drivers build config 00:01:07.773 compress/mlx5: not in enabled drivers build config 00:01:07.773 compress/octeontx: not in enabled drivers build config 00:01:07.773 compress/zlib: not in enabled drivers build config 00:01:07.773 regex/mlx5: not in enabled drivers build config 00:01:07.773 regex/cn9k: not in enabled drivers build config 00:01:07.773 vdpa/ifc: not in enabled drivers build config 00:01:07.773 vdpa/mlx5: not in enabled drivers build config 00:01:07.773 vdpa/sfc: not in enabled drivers build config 00:01:07.773 event/cnxk: not in enabled drivers build config 00:01:07.773 event/dlb2: not in enabled drivers build config 00:01:07.773 event/dpaa: not in enabled drivers build config 00:01:07.773 event/dpaa2: not in enabled drivers build config 00:01:07.773 event/dsw: not in enabled drivers build config 00:01:07.773 event/opdl: not in enabled drivers build config 00:01:07.773 event/skeleton: not in enabled drivers build config 00:01:07.773 event/sw: not in enabled drivers build config 00:01:07.773 event/octeontx: 
not in enabled drivers build config 00:01:07.773 baseband/acc: not in enabled drivers build config 00:01:07.773 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:07.773 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:07.773 baseband/la12xx: not in enabled drivers build config 00:01:07.773 baseband/null: not in enabled drivers build config 00:01:07.773 baseband/turbo_sw: not in enabled drivers build config 00:01:07.773 gpu/cuda: not in enabled drivers build config 00:01:07.773 00:01:07.773 00:01:07.773 Build targets in project: 316 00:01:07.773 00:01:07.773 DPDK 22.11.4 00:01:07.773 00:01:07.773 User defined options 00:01:07.773 libdir : lib 00:01:07.773 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.773 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:07.773 c_link_args : 00:01:07.773 enable_docs : false 00:01:07.773 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:07.773 enable_kmods : false 00:01:07.773 machine : native 00:01:07.773 tests : false 00:01:07.773 00:01:07.773 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:07.773 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:07.773 23:44:42 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 00:01:07.773 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:08.033 [1/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:08.033 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:08.033 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:08.033 [4/745] Generating lib/rte_telemetry_def with a custom command 00:01:08.033 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:08.033 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:08.033 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:08.033 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:08.033 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:08.033 [10/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:08.033 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:08.033 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:08.033 [13/745] Linking static target lib/librte_kvargs.a 00:01:08.033 [14/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:08.033 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:08.033 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:08.033 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:08.033 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:08.033 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:08.033 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 
00:01:08.033 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:08.033 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:08.033 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:08.296 [24/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:08.296 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:08.296 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:08.296 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:08.296 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:08.296 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:08.296 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:08.296 [31/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:08.296 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:08.296 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:08.296 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:08.296 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:08.296 [36/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:08.296 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:08.296 [38/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:08.296 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:08.296 [40/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:08.296 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:08.296 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 
00:01:08.296 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:08.296 [44/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:08.296 [45/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:08.558 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:08.558 [47/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:08.558 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:08.558 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:08.558 [50/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:08.558 [51/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:08.558 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:08.558 [53/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:08.558 [54/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.558 [55/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:08.558 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:08.558 [57/745] Generating lib/rte_eal_def with a custom command 00:01:08.558 [58/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:08.558 [59/745] Generating lib/rte_ring_def with a custom command 00:01:08.558 [60/745] Generating lib/rte_eal_mingw with a custom command 00:01:08.558 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:08.558 [62/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:08.558 [63/745] Generating lib/rte_ring_mingw with a custom command 00:01:08.558 [64/745] Generating lib/rte_rcu_def with a custom command 00:01:08.558 [65/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:08.558 [66/745] Generating 
lib/rte_rcu_mingw with a custom command 00:01:08.558 [67/745] Linking target lib/librte_kvargs.so.23.0 00:01:08.558 [68/745] Generating lib/rte_mempool_mingw with a custom command 00:01:08.558 [69/745] Generating lib/rte_mempool_def with a custom command 00:01:08.558 [70/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:08.558 [71/745] Generating lib/rte_mbuf_def with a custom command 00:01:08.558 [72/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:08.559 [73/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:08.559 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:08.559 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:08.559 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:08.819 [77/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:08.819 [78/745] Generating lib/rte_net_def with a custom command 00:01:08.819 [79/745] Generating lib/rte_net_mingw with a custom command 00:01:08.819 [80/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:08.819 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:08.819 [82/745] Linking static target lib/librte_ring.a 00:01:08.819 [83/745] Generating lib/rte_meter_def with a custom command 00:01:08.819 [84/745] Generating lib/rte_meter_mingw with a custom command 00:01:08.819 [85/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:08.819 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:08.819 [87/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:08.819 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:09.081 [89/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:09.081 [90/745] Linking static target lib/librte_meter.a 00:01:09.081 [91/745] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:09.081 [92/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:09.081 [93/745] Linking static target lib/librte_telemetry.a 00:01:09.346 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:09.346 [95/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.346 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:09.346 [97/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.346 [98/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:09.605 [99/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.605 [100/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:09.605 [101/745] Linking target lib/librte_telemetry.so.23.0 00:01:09.605 [102/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:09.865 [103/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:09.865 [104/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:09.865 [105/745] Generating lib/rte_ethdev_def with a custom command 00:01:09.865 [106/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:09.865 [107/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:09.865 [108/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:09.865 [109/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:09.865 [110/745] Generating lib/rte_pci_def with a custom command 00:01:09.865 [111/745] Generating lib/rte_pci_mingw with a custom command 00:01:09.865 [112/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:09.865 [113/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 
00:01:09.865 [114/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:10.125 [115/745] Linking static target lib/librte_pci.a 00:01:10.125 [116/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:10.125 [117/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:10.125 [118/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:10.125 [119/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:10.125 [120/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:10.125 [121/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:10.125 [122/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:10.125 [123/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:10.125 [124/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:10.125 [125/745] Generating lib/rte_cmdline_def with a custom command 00:01:10.125 [126/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:10.125 [127/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:10.125 [128/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:10.125 [129/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:10.392 [130/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:10.392 [131/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:10.392 [132/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:10.392 [133/745] Generating lib/rte_metrics_mingw with a custom command 00:01:10.392 [134/745] Generating lib/rte_metrics_def with a custom command 00:01:10.392 [135/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.392 [136/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:10.392 
[137/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:10.392 [138/745] Linking static target lib/librte_net.a 00:01:10.392 [139/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:10.392 [140/745] Generating lib/rte_hash_mingw with a custom command 00:01:10.392 [141/745] Generating lib/rte_hash_def with a custom command 00:01:10.392 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:10.392 [143/745] Generating lib/rte_timer_def with a custom command 00:01:10.392 [144/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:10.392 [145/745] Generating lib/rte_timer_mingw with a custom command 00:01:10.392 [146/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:10.392 [147/745] Linking static target lib/librte_rcu.a 00:01:10.392 [148/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:10.392 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:10.392 [150/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:10.651 [151/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:10.651 [152/745] Generating lib/rte_acl_def with a custom command 00:01:10.651 [153/745] Generating lib/rte_acl_mingw with a custom command 00:01:10.651 [154/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:10.651 [155/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:10.651 [156/745] Generating lib/rte_bbdev_def with a custom command 00:01:10.651 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:10.651 [158/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:10.651 [159/745] Generating lib/rte_bitratestats_def with a custom command 00:01:10.651 [160/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.911 [161/745] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:10.911 [162/745] Linking static target lib/librte_mempool.a 00:01:10.911 [163/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:10.911 [164/745] Linking static target lib/librte_eal.a 00:01:10.911 [165/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.911 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:10.911 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:11.171 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:11.171 [169/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:11.171 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:11.171 [171/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:11.171 [172/745] Generating lib/rte_bpf_def with a custom command 00:01:11.435 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:11.435 [174/745] Generating lib/rte_bpf_mingw with a custom command 00:01:11.435 [175/745] Linking static target lib/librte_timer.a 00:01:11.435 [176/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:11.435 [177/745] Linking static target lib/librte_cmdline.a 00:01:11.435 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:11.435 [179/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:11.435 [180/745] Linking static target lib/librte_metrics.a 00:01:11.435 [181/745] Generating lib/rte_cfgfile_def with a custom command 00:01:11.697 [182/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:11.697 [183/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:11.960 [184/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:11.960 [185/745] Compiling C object 
lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:11.960 [186/745] Linking static target lib/librte_bitratestats.a 00:01:11.960 [187/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.960 [188/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:11.960 [189/745] Generating lib/rte_compressdev_def with a custom command 00:01:11.960 [190/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:11.960 [191/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:11.960 [192/745] Linking static target lib/librte_cfgfile.a 00:01:11.960 [193/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:11.960 [194/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.960 [195/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.960 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:11.960 [197/745] Generating lib/rte_cryptodev_def with a custom command 00:01:11.960 [198/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:12.225 [199/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:12.225 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:12.225 [201/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.225 [202/745] Generating lib/rte_distributor_def with a custom command 00:01:12.225 [203/745] Generating lib/rte_distributor_mingw with a custom command 00:01:12.225 [204/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:12.225 [205/745] Generating lib/rte_efd_def with a custom command 00:01:12.225 [206/745] Generating lib/rte_efd_mingw with a custom command 00:01:12.490 [207/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:12.490 [208/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:12.490 [209/745] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:12.490 [210/745] Linking static target lib/librte_bbdev.a 00:01:12.490 [211/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.490 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:12.751 [213/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.013 [214/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:13.013 [215/745] Generating lib/rte_eventdev_def with a custom command 00:01:13.013 [216/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:13.013 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:13.013 [218/745] Generating lib/rte_gpudev_def with a custom command 00:01:13.275 [219/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:13.275 [220/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:13.275 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:13.275 [222/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.275 [223/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:13.536 [224/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:13.536 [225/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:13.536 [226/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:13.536 [227/745] Generating lib/rte_gro_def with a custom command 00:01:13.536 [228/745] Generating lib/rte_gro_mingw with a custom command 00:01:13.536 [229/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:13.536 [230/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:13.536 [231/745] Linking static target 
lib/librte_compressdev.a 00:01:13.536 [232/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:13.797 [233/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:13.797 [234/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:13.797 [235/745] Linking static target lib/librte_bpf.a 00:01:13.797 [236/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:14.060 [237/745] Generating lib/rte_gso_def with a custom command 00:01:14.060 [238/745] Generating lib/rte_gso_mingw with a custom command 00:01:14.326 [239/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:14.326 [240/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.326 [241/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:14.326 [242/745] Linking static target lib/librte_distributor.a 00:01:14.937 [243/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:14.937 [244/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:14.937 [245/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.937 [246/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:14.937 [247/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:14.937 [248/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:14.937 [249/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:14.937 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:14.937 [251/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:14.937 [252/745] Generating lib/rte_ip_frag_def with a custom command 00:01:14.937 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:14.937 [254/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.937 [255/745] 
Generating lib/rte_jobstats_def with a custom command 00:01:14.937 [256/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:14.937 [257/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:14.937 [258/745] Generating lib/rte_latencystats_def with a custom command 00:01:14.937 [259/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:14.937 [260/745] Generating lib/rte_lpm_def with a custom command 00:01:14.937 [261/745] Generating lib/rte_lpm_mingw with a custom command 00:01:14.937 [262/745] Linking static target lib/librte_gpudev.a 00:01:15.199 [263/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:15.199 [264/745] Linking static target lib/librte_jobstats.a 00:01:15.199 [265/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.199 [266/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:15.199 [267/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:15.199 [268/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:15.199 [269/745] Linking static target lib/librte_gro.a 00:01:15.199 [270/745] Generating lib/rte_member_def with a custom command 00:01:15.199 [271/745] Generating lib/rte_member_mingw with a custom command 00:01:15.462 [272/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:15.462 [273/745] Generating lib/rte_pcapng_def with a custom command 00:01:15.462 [274/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:15.462 [275/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:15.462 [276/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.722 [277/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.722 [278/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 
00:01:15.722 [279/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:15.722 [280/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:15.722 [281/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:15.722 [282/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:15.987 [283/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:15.987 [284/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:16.250 [285/745] Linking static target lib/librte_hash.a 00:01:16.250 [286/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:16.250 [287/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:16.250 [288/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:16.250 [289/745] Generating lib/rte_power_mingw with a custom command 00:01:16.250 [290/745] Generating lib/rte_power_def with a custom command 00:01:16.250 [291/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:16.250 [292/745] Generating lib/rte_rawdev_def with a custom command 00:01:16.513 [293/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:16.513 [294/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:16.513 [295/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:16.513 [296/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:16.513 [297/745] Linking static target lib/librte_gso.a 00:01:16.513 [298/745] Linking static target lib/librte_latencystats.a 00:01:16.513 [299/745] Generating lib/rte_regexdev_def with a custom command 00:01:16.513 [300/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:16.513 [301/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.513 [302/745] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:16.513 [303/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:16.513 [304/745] Generating lib/rte_dmadev_def with a custom command 00:01:16.513 [305/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:16.513 [306/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:16.513 [307/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:16.513 [308/745] Linking static target lib/librte_ethdev.a 00:01:16.513 [309/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:16.513 [310/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:16.513 [311/745] Linking static target lib/acl/libavx2_tmp.a 00:01:16.513 [312/745] Generating lib/rte_rib_def with a custom command 00:01:16.513 [313/745] Generating lib/rte_rib_mingw with a custom command 00:01:16.513 [314/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:16.513 [315/745] Generating lib/rte_reorder_mingw with a custom command 00:01:16.513 [316/745] Generating lib/rte_reorder_def with a custom command 00:01:16.776 [317/745] Generating lib/rte_sched_def with a custom command 00:01:16.776 [318/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.776 [319/745] Generating lib/rte_sched_mingw with a custom command 00:01:16.776 [320/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.776 [321/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:16.776 [322/745] Generating lib/rte_security_mingw with a custom command 00:01:16.776 [323/745] Generating lib/rte_security_def with a custom command 00:01:16.776 [324/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:16.776 [325/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:16.776 [326/745] Generating lib/rte_stack_def 
with a custom command 00:01:16.776 [327/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:17.038 [328/745] Generating lib/rte_stack_mingw with a custom command 00:01:17.038 [329/745] Linking static target lib/librte_efd.a 00:01:17.038 [330/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:17.038 [331/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:17.038 [332/745] Linking static target lib/librte_ip_frag.a 00:01:17.038 [333/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:17.038 [334/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:17.038 [335/745] Linking static target lib/librte_stack.a 00:01:17.038 [336/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:17.038 [337/745] Linking static target lib/acl/libavx512_tmp.a 00:01:17.038 [338/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:17.038 [339/745] Linking static target lib/librte_rawdev.a 00:01:17.038 [340/745] Linking static target lib/librte_acl.a 00:01:17.038 [341/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:17.038 [342/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:17.038 [343/745] Linking static target lib/librte_mbuf.a 00:01:17.300 [344/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.300 [345/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.300 [346/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:17.300 [347/745] Linking static target lib/librte_dmadev.a 00:01:17.300 [348/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.300 [349/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:17.300 [350/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:17.566 [351/745] Generating lib/rte_vhost_def with a custom command 00:01:17.566 [352/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:17.566 [353/745] Generating lib/rte_vhost_mingw with a custom command 00:01:17.566 [354/745] Linking static target lib/librte_pcapng.a 00:01:17.566 [355/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.567 [356/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:17.832 [357/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.832 [358/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:17.832 [359/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:17.832 [360/745] Linking static target lib/librte_regexdev.a 00:01:18.096 [361/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.096 [362/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.096 [363/745] Generating lib/rte_ipsec_def with a custom command 00:01:18.096 [364/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:18.096 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:18.096 [366/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.096 [367/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:18.360 [368/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:18.360 [369/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:18.360 [370/745] Linking static target lib/librte_lpm.a 00:01:18.360 [371/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:18.360 [372/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:18.360 [373/745] Linking static target 
lib/librte_reorder.a 00:01:18.360 [374/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:18.360 [375/745] Generating lib/rte_fib_def with a custom command 00:01:18.360 [376/745] Generating lib/rte_fib_mingw with a custom command 00:01:18.360 [377/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:18.624 [378/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:18.624 [379/745] Linking static target lib/librte_eventdev.a 00:01:18.624 [380/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:18.624 [381/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:18.624 [382/745] Linking static target lib/librte_power.a 00:01:18.884 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.884 [384/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.884 [385/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:18.884 [386/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:18.884 [387/745] Linking static target lib/librte_security.a 00:01:18.884 [388/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:19.147 [389/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:19.147 [390/745] Linking static target lib/librte_cryptodev.a 00:01:19.147 [391/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:19.147 [392/745] Linking static target lib/librte_rib.a 00:01:19.147 [393/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.147 [394/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:19.412 [395/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:19.412 [396/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:19.412 [397/745] Compiling C object 
lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:19.412 [398/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:19.412 [399/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:19.412 [400/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:19.412 [401/745] Generating lib/rte_port_def with a custom command 00:01:19.677 [402/745] Generating lib/rte_port_mingw with a custom command 00:01:19.677 [403/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:19.677 [404/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.677 [405/745] Generating lib/rte_pdump_def with a custom command 00:01:19.677 [406/745] Generating lib/rte_pdump_mingw with a custom command 00:01:19.677 [407/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.937 [408/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:20.200 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:20.200 [410/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.200 [411/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:20.200 [412/745] Linking static target lib/librte_member.a 00:01:20.463 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:20.463 [414/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:20.463 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:20.463 [416/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:20.720 [417/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:20.720 [418/745] Linking static target lib/librte_sched.a 00:01:20.720 [419/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.720 [420/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:20.720 [421/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.984 [422/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:20.984 [423/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:20.985 [424/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:20.985 [425/745] Linking static target lib/librte_fib.a 00:01:21.244 [426/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:21.244 [427/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:21.244 [428/745] Generating lib/rte_table_def with a custom command 00:01:21.244 [429/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.244 [430/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:21.244 [431/745] Generating lib/rte_table_mingw with a custom command 00:01:21.244 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:21.510 [433/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:21.510 [434/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:21.510 [435/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:21.510 [436/745] Generating lib/rte_pipeline_def with a custom command 00:01:21.510 [437/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:21.510 [438/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:21.510 [439/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.780 [440/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.780 [441/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:21.780 [442/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:22.038 [443/745] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:22.038 [444/745] Generating lib/rte_graph_def with a custom command 00:01:22.038 [445/745] Generating lib/rte_graph_mingw with a custom command 00:01:22.299 [446/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:22.299 [447/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:22.299 [448/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:22.299 [449/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.561 [450/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:22.561 [451/745] Linking static target lib/librte_pdump.a 00:01:22.561 [452/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:22.561 [453/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:22.826 [454/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:22.826 [455/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:22.826 [456/745] Linking static target lib/librte_ipsec.a 00:01:22.826 [457/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:22.826 [458/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:22.826 [459/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:22.826 [460/745] Generating lib/rte_node_def with a custom command 00:01:22.826 [461/745] Generating lib/rte_node_mingw with a custom command 00:01:23.087 [462/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.088 [463/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:23.088 [464/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:23.088 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:23.088 [466/745] Compiling C 
object lib/librte_graph.a.p/graph_graph.c.o 00:01:23.088 [467/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:23.088 [468/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:23.088 [469/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:23.353 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:23.353 [471/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:23.353 [472/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.353 [473/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:23.353 [474/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:23.353 [475/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:23.353 [476/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:23.353 [477/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:23.353 [478/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:23.353 [479/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:23.353 [480/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:23.353 [481/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:23.353 [482/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:23.353 [483/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.615 [484/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:23.615 [485/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:23.615 [486/745] Linking static target lib/librte_table.a 00:01:23.615 [487/745] Linking target lib/librte_eal.so.23.0 00:01:23.615 [488/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:23.615 [489/745] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:23.615 [490/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:23.877 [491/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:23.877 [492/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:23.877 [493/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:24.142 [494/745] Linking target lib/librte_ring.so.23.0 00:01:24.142 [495/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:24.142 [496/745] Linking target lib/librte_meter.so.23.0 00:01:24.142 [497/745] Linking target lib/librte_pci.so.23.0 00:01:24.142 [498/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:24.142 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:24.142 [500/745] Linking target lib/librte_timer.so.23.0 00:01:24.142 [501/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.142 [502/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:24.142 [503/745] Linking target lib/librte_acl.so.23.0 00:01:24.406 [504/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:24.406 [505/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:24.406 [506/745] Linking target lib/librte_cfgfile.so.23.0 00:01:24.406 [507/745] Linking target lib/librte_rcu.so.23.0 00:01:24.406 [508/745] Linking target lib/librte_mempool.so.23.0 00:01:24.406 [509/745] Linking target lib/librte_jobstats.so.23.0 00:01:24.406 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:24.406 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:24.406 [512/745] Linking target lib/librte_rawdev.so.23.0 00:01:24.406 [513/745] Linking static target lib/librte_port.a 00:01:24.406 [514/745] 
Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:24.406 [515/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:24.406 [516/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:24.406 [517/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:24.406 [518/745] Linking static target lib/librte_graph.a 00:01:24.406 [519/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:24.406 [520/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:24.406 [521/745] Linking target lib/librte_dmadev.so.23.0 00:01:24.406 [522/745] Linking static target drivers/librte_bus_vdev.a 00:01:24.406 [523/745] Linking target lib/librte_stack.so.23.0 00:01:24.406 [524/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:24.667 [525/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:24.667 [526/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:24.667 [527/745] Linking target lib/librte_mbuf.so.23.0 00:01:24.667 [528/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:24.667 [529/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.667 [530/745] Linking target lib/librte_rib.so.23.0 00:01:24.667 [531/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:24.667 [532/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:24.929 [533/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:24.929 [534/745] Linking static target drivers/librte_bus_pci.a 00:01:24.929 [535/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:24.929 [536/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 
00:01:24.929 [537/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.929 [538/745] Linking target lib/librte_bbdev.so.23.0 00:01:24.929 [539/745] Linking target lib/librte_net.so.23.0 00:01:24.929 [540/745] Linking target lib/librte_compressdev.so.23.0 00:01:24.929 [541/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:24.929 [542/745] Linking target lib/librte_distributor.so.23.0 00:01:24.929 [543/745] Linking target lib/librte_cryptodev.so.23.0 00:01:24.929 [544/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:25.194 [545/745] Linking target lib/librte_gpudev.so.23.0 00:01:25.194 [546/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:25.194 [547/745] Linking target lib/librte_regexdev.so.23.0 00:01:25.194 [548/745] Linking target lib/librte_reorder.so.23.0 00:01:25.194 [549/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.194 [550/745] Linking target lib/librte_sched.so.23.0 00:01:25.194 [551/745] Linking target lib/librte_fib.so.23.0 00:01:25.194 [552/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:25.194 [553/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:25.194 [554/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.194 [555/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:25.455 [556/745] Linking target lib/librte_ethdev.so.23.0 00:01:25.455 [557/745] Linking target lib/librte_cmdline.so.23.0 00:01:25.455 [558/745] Linking target lib/librte_hash.so.23.0 00:01:25.455 [559/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:25.455 [560/745] Linking target lib/librte_security.so.23.0 00:01:25.455 [561/745] Generating symbol file 
drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:25.455 [562/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.455 [563/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:25.455 [564/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:25.455 [565/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:25.455 [566/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:25.455 [567/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:25.717 [568/745] Linking target lib/librte_metrics.so.23.0 00:01:25.717 [569/745] Linking target lib/librte_efd.so.23.0 00:01:25.717 [570/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:25.717 [571/745] Linking target lib/librte_bpf.so.23.0 00:01:25.717 [572/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:25.717 [573/745] Linking target lib/librte_eventdev.so.23.0 00:01:25.978 [574/745] Linking target lib/librte_gro.so.23.0 00:01:25.978 [575/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:25.978 [576/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:25.978 [577/745] Linking target lib/librte_gso.so.23.0 00:01:25.978 [578/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:25.978 [579/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.978 [580/745] Linking target lib/librte_ip_frag.so.23.0 00:01:25.978 [581/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.978 [582/745] Linking target lib/librte_bitratestats.so.23.0 00:01:25.978 [583/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:25.978 
[584/745] Linking target lib/librte_lpm.so.23.0 00:01:25.978 [585/745] Linking target lib/librte_member.so.23.0 00:01:25.978 [586/745] Linking target lib/librte_latencystats.so.23.0 00:01:26.245 [587/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:26.245 [588/745] Linking target lib/librte_power.so.23.0 00:01:26.245 [589/745] Linking target lib/librte_pcapng.so.23.0 00:01:26.245 [590/745] Linking target lib/librte_ipsec.so.23.0 00:01:26.245 [591/745] Linking target lib/librte_graph.so.23.0 00:01:26.245 [592/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:26.245 [593/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:26.508 [594/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:26.508 [595/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:26.508 [596/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:26.508 [597/745] Linking target lib/librte_port.so.23.0 00:01:26.508 [598/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:26.508 [599/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.508 [600/745] Linking target lib/librte_pdump.so.23.0 00:01:26.508 [601/745] Linking static target drivers/librte_mempool_ring.a 00:01:26.508 [602/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:26.508 [603/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:26.508 [604/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:26.768 [605/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:26.768 [606/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:26.768 [607/745] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:26.768 [608/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:26.768 [609/745] Linking target lib/librte_table.so.23.0 00:01:26.768 [610/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:27.028 [611/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:27.028 [612/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:27.028 [613/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:27.028 [614/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:27.289 [615/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:27.550 [616/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:27.809 [617/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:27.809 [618/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:28.074 [619/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:28.074 [620/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:28.340 [621/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:28.340 [622/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:28.340 [623/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:28.340 [624/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:28.599 [625/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:28.599 [626/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:28.862 [627/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:28.862 [628/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:28.862 [629/745] Linking 
static target drivers/net/i40e/base/libi40e_base.a 00:01:28.862 [630/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:28.862 [631/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:29.120 [632/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:29.120 [633/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:29.120 [634/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:29.120 [635/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:29.399 [636/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:29.695 [637/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:29.973 [638/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:29.973 [639/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:29.973 [640/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:30.546 [641/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:30.546 [642/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:30.546 [643/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:31.118 [644/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:31.118 [645/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:31.118 [646/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:31.118 [647/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:31.118 [648/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:31.689 [649/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:31.952 [650/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:31.952 [651/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:32.214 [652/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:32.214 [653/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:32.214 [654/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:32.477 [655/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:32.477 [656/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:32.477 [657/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:32.477 [658/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:32.738 [659/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:32.738 [660/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:32.738 [661/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:32.738 [662/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:32.738 [663/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:32.738 [664/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:32.738 [665/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:32.999 [666/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:33.263 [667/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:33.263 [668/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:33.527 [669/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:33.527 [670/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:33.527 [671/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:33.527 [672/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:33.527 [673/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:33.788 [674/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:33.788 [675/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:34.053 [676/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:34.321 [677/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:34.321 [678/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:34.321 [679/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:34.321 [680/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:34.321 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:34.321 [682/745] Linking static target drivers/librte_net_i40e.a 00:01:34.321 [683/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:34.321 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:34.579 [685/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:34.838 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:34.838 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:34.838 [688/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:34.838 [689/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:35.100 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:35.100 [691/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:35.100 [692/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.358 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:35.358 [694/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:35.358 [695/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:35.358 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:35.358 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:35.615 [698/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:35.615 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:35.615 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:35.873 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:35.873 [702/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:35.873 [703/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:35.873 [704/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:35.873 [705/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:35.873 [706/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:36.131 [707/745] Linking static target lib/librte_node.a 00:01:36.131 [708/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.389 [709/745] Linking target lib/librte_node.so.23.0 00:01:36.389 [710/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:36.389 [711/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:36.646 [712/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 
00:01:37.215 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:37.215 [714/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:37.472 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:38.036 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:38.965 [717/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:39.222 [718/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:40.152 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:45.410 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.512 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.512 [722/745] Linking static target lib/librte_vhost.a 00:02:17.512 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.512 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:39.429 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:39.429 [726/745] Linking static target lib/librte_pipeline.a 00:02:39.429 [727/745] Linking target app/dpdk-test-regex 00:02:39.429 [728/745] Linking target app/dpdk-test-pipeline 00:02:39.429 [729/745] Linking target app/dpdk-proc-info 00:02:39.429 [730/745] Linking target app/dpdk-test-eventdev 00:02:39.429 [731/745] Linking target app/dpdk-pdump 00:02:39.429 [732/745] Linking target app/dpdk-test-compress-perf 00:02:39.429 [733/745] Linking target app/dpdk-test-acl 00:02:39.429 [734/745] Linking target app/dpdk-test-security-perf 00:02:39.429 [735/745] Linking target app/dpdk-test-fib 00:02:39.429 [736/745] Linking target app/dpdk-dumpcap 00:02:39.429 [737/745] Linking target app/dpdk-test-gpudev 00:02:39.429 [738/745] Linking target app/dpdk-test-crypto-perf 00:02:39.429 [739/745] Linking target app/dpdk-testpmd 00:02:39.429 [740/745] Linking target app/dpdk-test-cmdline 
00:02:39.429 [741/745] Linking target app/dpdk-test-flow-perf 00:02:39.429 [742/745] Linking target app/dpdk-test-sad 00:02:39.429 [743/745] Linking target app/dpdk-test-bbdev 00:02:39.429 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.429 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:39.429 23:46:13 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 install 00:02:39.429 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:39.429 [0/1] Installing files. 00:02:39.429 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:39.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.430 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.431 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:39.431 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.431 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.432 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.432 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.432 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:39.432 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.433 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.433 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.434 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:39.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:02:39.436 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.436 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:39.437 Installing lib/librte_security.so.23.0 to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.437 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.698 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.698 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.698 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.698 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:39.698 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-compress-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:39.698 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.698 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.699 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.700 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:39.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:39.988 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23
00:02:39.988 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:39.988 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23
00:02:39.988 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:39.988 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23
00:02:39.989 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:02:39.989 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23
00:02:39.989 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:02:39.989 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23
00:02:39.989 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:02:39.989 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23
00:02:39.989 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:02:39.989 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23
00:02:39.989 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:02:39.989 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23
00:02:39.989 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:02:39.989 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23
00:02:39.989 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:02:39.989 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23
00:02:39.989 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:02:39.989 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23
00:02:39.989 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:02:39.989 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23
00:02:39.989 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:02:39.989 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23
00:02:39.989 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:02:39.989 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23
00:02:39.989 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:02:39.989 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23
00:02:39.989 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:02:39.989 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23
00:02:39.989 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:02:39.989 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23
00:02:39.989 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:02:39.989 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23
00:02:39.989 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:02:39.989 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23
00:02:39.989 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:02:39.989 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23
00:02:39.989 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:02:39.989 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23
00:02:39.989 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:02:39.989 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23
00:02:39.989 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:02:39.989 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23
00:02:39.989 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:02:39.989 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23
00:02:39.989 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:02:39.989 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23
00:02:39.989 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:02:39.989 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23
00:02:39.989 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:02:39.989 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23
00:02:39.989 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:02:39.989 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23
00:02:39.989 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:02:39.989 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23
00:02:39.989 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:02:39.989 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23
00:02:39.989 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:02:39.989 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23
00:02:39.989 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:02:39.989 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23
00:02:39.989 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:02:39.989 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23
00:02:39.989 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:02:39.989 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23
00:02:39.989 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:02:39.989 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23
00:02:39.989 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:02:39.989 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23
00:02:39.989 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:02:39.989 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23
00:02:39.989 Installing symlink pointing to librte_regexdev.so.23 to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:39.989 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:39.989 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:39.989 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:39.989 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:39.989 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:39.989 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:39.989 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:39.989 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:39.989 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:39.989 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:39.989 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:39.989 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:39.989 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:39.989 Installing 
symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:39.989 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:39.989 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:39.989 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:39.989 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:39.989 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:39.989 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:39.989 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:39.989 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:39.989 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:39.989 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:39.989 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:39.989 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:39.989 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 
00:02:39.989 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:39.989 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:39.989 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:39.989 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:39.990 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:39.990 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:39.990 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:39.990 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:39.990 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:39.990 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:39.990 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:39.990 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:39.990 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:39.990 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:39.990 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:39.990 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:39.990 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:39.990 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:39.990 
'./librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:39.990 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:39.990 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:39.990 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:39.990 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:39.990 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:39.990 23:46:14 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:39.990 23:46:14 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:39.990 23:46:14 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:39.990 23:46:14 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.990 00:02:39.990 real 1m37.839s 00:02:39.990 user 15m9.991s 00:02:39.990 sys 1m46.749s 00:02:39.990 23:46:14 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:39.990 23:46:14 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:39.990 ************************************ 00:02:39.990 END TEST build_native_dpdk 00:02:39.990 ************************************ 00:02:39.990 23:46:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:39.990 23:46:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:39.990 23:46:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:39.990 23:46:14 -- 
spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:39.990 23:46:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:39.990 23:46:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:39.990 23:46:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:39.990 23:46:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:39.990 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:39.990 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:39.990 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:40.249 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:40.508 Using 'verbs' RDMA provider 00:02:51.059 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:03.266 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:03.266 Creating mk/config.mk...done. 00:03:03.266 Creating mk/cc.flags.mk...done. 00:03:03.266 Type 'make' to build. 00:03:03.266 23:46:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:03:03.266 23:46:35 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:03.266 23:46:35 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:03.266 23:46:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.266 ************************************ 00:03:03.266 START TEST make 00:03:03.266 ************************************ 00:03:03.266 23:46:35 make -- common/autotest_common.sh@1121 -- $ make -j32 00:03:03.266 make[1]: Nothing to be done for 'all'. 
00:03:03.529 The Meson build system 00:03:03.529 Version: 1.3.1 00:03:03.529 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:03.529 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:03.529 Build type: native build 00:03:03.529 Project name: libvfio-user 00:03:03.529 Project version: 0.0.1 00:03:03.529 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:03.529 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:03.529 Host machine cpu family: x86_64 00:03:03.529 Host machine cpu: x86_64 00:03:03.529 Run-time dependency threads found: YES 00:03:03.529 Library dl found: YES 00:03:03.529 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:03.529 Run-time dependency json-c found: YES 0.17 00:03:03.529 Run-time dependency cmocka found: YES 1.1.7 00:03:03.529 Program pytest-3 found: NO 00:03:03.529 Program flake8 found: NO 00:03:03.529 Program misspell-fixer found: NO 00:03:03.529 Program restructuredtext-lint found: NO 00:03:03.529 Program valgrind found: YES (/usr/bin/valgrind) 00:03:03.529 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:03.529 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:03.529 Compiler for C supports arguments -Wwrite-strings: YES 00:03:03.529 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:03.529 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:03.529 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:03.529 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:03.529 Build targets in project: 8 00:03:03.529 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:03.529 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:03.529 00:03:03.529 libvfio-user 0.0.1 00:03:03.529 00:03:03.529 User defined options 00:03:03.529 buildtype : debug 00:03:03.529 default_library: shared 00:03:03.529 libdir : /usr/local/lib 00:03:03.529 00:03:03.529 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.146 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:04.445 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:04.445 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:04.445 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:04.445 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:04.445 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:04.445 [6/37] Compiling C object samples/null.p/null.c.o 00:03:04.445 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:04.445 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:04.445 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:04.445 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:04.445 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:04.445 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:04.445 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:04.445 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:04.445 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:04.445 [16/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:04.445 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:04.445 [18/37] Compiling C object 
samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:04.445 [19/37] Compiling C object samples/server.p/server.c.o 00:03:04.445 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:04.715 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:04.715 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:04.715 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:04.715 [24/37] Compiling C object samples/client.p/client.c.o 00:03:04.715 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:04.715 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:04.715 [27/37] Linking target samples/client 00:03:04.715 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:04.715 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:04.977 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:04.977 [31/37] Linking target test/unit_tests 00:03:04.977 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:05.245 [33/37] Linking target samples/server 00:03:05.245 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:05.245 [35/37] Linking target samples/lspci 00:03:05.245 [36/37] Linking target samples/gpio-pci-idio-16 00:03:05.245 [37/37] Linking target samples/null 00:03:05.245 INFO: autodetecting backend as ninja 00:03:05.245 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:05.245 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:05.817 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:05.817 ninja: no work to do. 
00:03:20.706 CC lib/log/log.o 00:03:20.706 CC lib/log/log_flags.o 00:03:20.706 CC lib/log/log_deprecated.o 00:03:20.706 CC lib/ut_mock/mock.o 00:03:20.706 CC lib/ut/ut.o 00:03:20.706 LIB libspdk_log.a 00:03:20.706 LIB libspdk_ut.a 00:03:20.706 LIB libspdk_ut_mock.a 00:03:20.706 SO libspdk_ut.so.2.0 00:03:20.706 SO libspdk_ut_mock.so.6.0 00:03:20.706 SO libspdk_log.so.7.0 00:03:20.706 SYMLINK libspdk_ut_mock.so 00:03:20.706 SYMLINK libspdk_ut.so 00:03:20.706 SYMLINK libspdk_log.so 00:03:20.706 CXX lib/trace_parser/trace.o 00:03:20.706 CC lib/dma/dma.o 00:03:20.706 CC lib/util/base64.o 00:03:20.706 CC lib/ioat/ioat.o 00:03:20.706 CC lib/util/bit_array.o 00:03:20.706 CC lib/util/cpuset.o 00:03:20.706 CC lib/util/crc16.o 00:03:20.706 CC lib/util/crc32.o 00:03:20.706 CC lib/util/crc32c.o 00:03:20.706 CC lib/util/crc32_ieee.o 00:03:20.706 CC lib/util/crc64.o 00:03:20.706 CC lib/util/dif.o 00:03:20.706 CC lib/util/fd.o 00:03:20.706 CC lib/util/file.o 00:03:20.706 CC lib/util/hexlify.o 00:03:20.706 CC lib/util/iov.o 00:03:20.706 CC lib/util/math.o 00:03:20.706 CC lib/util/strerror_tls.o 00:03:20.706 CC lib/util/pipe.o 00:03:20.706 CC lib/util/string.o 00:03:20.706 CC lib/util/uuid.o 00:03:20.706 CC lib/util/fd_group.o 00:03:20.706 CC lib/util/xor.o 00:03:20.706 CC lib/util/zipf.o 00:03:20.706 CC lib/vfio_user/host/vfio_user_pci.o 00:03:20.706 CC lib/vfio_user/host/vfio_user.o 00:03:20.706 LIB libspdk_dma.a 00:03:20.706 SO libspdk_dma.so.4.0 00:03:20.706 SYMLINK libspdk_dma.so 00:03:20.706 LIB libspdk_vfio_user.a 00:03:20.706 SO libspdk_vfio_user.so.5.0 00:03:20.706 LIB libspdk_ioat.a 00:03:20.964 SO libspdk_ioat.so.7.0 00:03:20.964 SYMLINK libspdk_vfio_user.so 00:03:20.964 SYMLINK libspdk_ioat.so 00:03:21.221 LIB libspdk_util.a 00:03:21.221 SO libspdk_util.so.9.0 00:03:21.480 SYMLINK libspdk_util.so 00:03:21.480 LIB libspdk_trace_parser.a 00:03:21.480 SO libspdk_trace_parser.so.5.0 00:03:21.480 SYMLINK libspdk_trace_parser.so 00:03:21.480 CC lib/idxd/idxd.o 00:03:21.480 CC 
lib/conf/conf.o 00:03:21.480 CC lib/idxd/idxd_user.o 00:03:21.480 CC lib/idxd/idxd_kernel.o 00:03:21.480 CC lib/vmd/vmd.o 00:03:21.480 CC lib/vmd/led.o 00:03:21.480 CC lib/rdma/common.o 00:03:21.480 CC lib/rdma/rdma_verbs.o 00:03:21.480 CC lib/json/json_parse.o 00:03:21.480 CC lib/json/json_util.o 00:03:21.480 CC lib/json/json_write.o 00:03:21.480 CC lib/env_dpdk/env.o 00:03:21.480 CC lib/env_dpdk/memory.o 00:03:21.480 CC lib/env_dpdk/pci.o 00:03:21.480 CC lib/env_dpdk/init.o 00:03:21.480 CC lib/env_dpdk/threads.o 00:03:21.480 CC lib/env_dpdk/pci_ioat.o 00:03:21.480 CC lib/env_dpdk/pci_virtio.o 00:03:21.480 CC lib/env_dpdk/pci_vmd.o 00:03:21.480 CC lib/env_dpdk/pci_idxd.o 00:03:21.480 CC lib/env_dpdk/pci_event.o 00:03:21.480 CC lib/env_dpdk/sigbus_handler.o 00:03:21.480 CC lib/env_dpdk/pci_dpdk.o 00:03:21.480 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.480 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.737 LIB libspdk_conf.a 00:03:21.737 SO libspdk_conf.so.6.0 00:03:21.995 SYMLINK libspdk_conf.so 00:03:21.995 LIB libspdk_json.a 00:03:21.995 LIB libspdk_rdma.a 00:03:21.995 SO libspdk_json.so.6.0 00:03:21.995 SO libspdk_rdma.so.6.0 00:03:21.995 SYMLINK libspdk_json.so 00:03:21.995 SYMLINK libspdk_rdma.so 00:03:22.252 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.252 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.252 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.252 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.252 LIB libspdk_idxd.a 00:03:22.252 SO libspdk_idxd.so.12.0 00:03:22.252 SYMLINK libspdk_idxd.so 00:03:22.509 LIB libspdk_vmd.a 00:03:22.509 SO libspdk_vmd.so.6.0 00:03:22.509 SYMLINK libspdk_vmd.so 00:03:22.509 LIB libspdk_jsonrpc.a 00:03:22.509 SO libspdk_jsonrpc.so.6.0 00:03:22.509 SYMLINK libspdk_jsonrpc.so 00:03:22.766 CC lib/rpc/rpc.o 00:03:23.024 LIB libspdk_rpc.a 00:03:23.024 SO libspdk_rpc.so.6.0 00:03:23.024 SYMLINK libspdk_rpc.so 00:03:23.281 CC lib/trace/trace.o 00:03:23.281 CC lib/trace/trace_flags.o 00:03:23.281 CC lib/trace/trace_rpc.o 00:03:23.281 CC 
lib/keyring/keyring.o 00:03:23.281 CC lib/keyring/keyring_rpc.o 00:03:23.281 CC lib/notify/notify.o 00:03:23.281 CC lib/notify/notify_rpc.o 00:03:23.281 LIB libspdk_notify.a 00:03:23.539 SO libspdk_notify.so.6.0 00:03:23.539 LIB libspdk_keyring.a 00:03:23.539 SYMLINK libspdk_notify.so 00:03:23.539 SO libspdk_keyring.so.1.0 00:03:23.539 LIB libspdk_trace.a 00:03:23.539 SO libspdk_trace.so.10.0 00:03:23.539 SYMLINK libspdk_keyring.so 00:03:23.539 SYMLINK libspdk_trace.so 00:03:23.798 CC lib/sock/sock.o 00:03:23.798 CC lib/sock/sock_rpc.o 00:03:23.798 CC lib/thread/thread.o 00:03:23.798 CC lib/thread/iobuf.o 00:03:23.798 LIB libspdk_env_dpdk.a 00:03:24.056 SO libspdk_env_dpdk.so.14.0 00:03:24.056 SYMLINK libspdk_env_dpdk.so 00:03:24.056 LIB libspdk_sock.a 00:03:24.314 SO libspdk_sock.so.9.0 00:03:24.314 SYMLINK libspdk_sock.so 00:03:24.314 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.314 CC lib/nvme/nvme_ctrlr.o 00:03:24.314 CC lib/nvme/nvme_fabric.o 00:03:24.315 CC lib/nvme/nvme_ns_cmd.o 00:03:24.315 CC lib/nvme/nvme_ns.o 00:03:24.315 CC lib/nvme/nvme_pcie_common.o 00:03:24.315 CC lib/nvme/nvme_pcie.o 00:03:24.315 CC lib/nvme/nvme_qpair.o 00:03:24.315 CC lib/nvme/nvme.o 00:03:24.315 CC lib/nvme/nvme_quirks.o 00:03:24.315 CC lib/nvme/nvme_transport.o 00:03:24.315 CC lib/nvme/nvme_discovery.o 00:03:24.315 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.315 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.315 CC lib/nvme/nvme_tcp.o 00:03:24.315 CC lib/nvme/nvme_opal.o 00:03:24.572 CC lib/nvme/nvme_io_msg.o 00:03:24.572 CC lib/nvme/nvme_poll_group.o 00:03:24.572 CC lib/nvme/nvme_zns.o 00:03:24.572 CC lib/nvme/nvme_stubs.o 00:03:24.572 CC lib/nvme/nvme_auth.o 00:03:24.572 CC lib/nvme/nvme_cuse.o 00:03:24.572 CC lib/nvme/nvme_vfio_user.o 00:03:24.572 CC lib/nvme/nvme_rdma.o 00:03:25.508 LIB libspdk_thread.a 00:03:25.508 SO libspdk_thread.so.10.0 00:03:25.766 SYMLINK libspdk_thread.so 00:03:25.766 CC lib/accel/accel.o 00:03:25.766 CC lib/virtio/virtio.o 00:03:25.766 CC lib/init/json_config.o 
00:03:25.766 CC lib/accel/accel_rpc.o 00:03:25.766 CC lib/virtio/virtio_vhost_user.o 00:03:25.766 CC lib/init/subsystem.o 00:03:25.766 CC lib/init/subsystem_rpc.o 00:03:25.766 CC lib/accel/accel_sw.o 00:03:25.766 CC lib/virtio/virtio_vfio_user.o 00:03:25.766 CC lib/init/rpc.o 00:03:25.766 CC lib/virtio/virtio_pci.o 00:03:25.766 CC lib/vfu_tgt/tgt_endpoint.o 00:03:25.766 CC lib/vfu_tgt/tgt_rpc.o 00:03:25.766 CC lib/blob/blobstore.o 00:03:25.766 CC lib/blob/request.o 00:03:25.766 CC lib/blob/zeroes.o 00:03:25.766 CC lib/blob/blob_bs_dev.o 00:03:26.332 LIB libspdk_init.a 00:03:26.332 LIB libspdk_vfu_tgt.a 00:03:26.332 SO libspdk_init.so.5.0 00:03:26.332 LIB libspdk_virtio.a 00:03:26.332 SO libspdk_vfu_tgt.so.3.0 00:03:26.332 SO libspdk_virtio.so.7.0 00:03:26.332 SYMLINK libspdk_init.so 00:03:26.332 SYMLINK libspdk_vfu_tgt.so 00:03:26.332 SYMLINK libspdk_virtio.so 00:03:26.591 CC lib/event/app.o 00:03:26.591 CC lib/event/reactor.o 00:03:26.591 CC lib/event/log_rpc.o 00:03:26.591 CC lib/event/app_rpc.o 00:03:26.591 CC lib/event/scheduler_static.o 00:03:26.850 LIB libspdk_event.a 00:03:27.125 SO libspdk_event.so.13.0 00:03:27.125 LIB libspdk_accel.a 00:03:27.125 SYMLINK libspdk_event.so 00:03:27.125 SO libspdk_accel.so.15.0 00:03:27.125 SYMLINK libspdk_accel.so 00:03:27.125 LIB libspdk_nvme.a 00:03:27.387 CC lib/bdev/bdev.o 00:03:27.387 CC lib/bdev/bdev_rpc.o 00:03:27.387 CC lib/bdev/bdev_zone.o 00:03:27.387 CC lib/bdev/part.o 00:03:27.387 CC lib/bdev/scsi_nvme.o 00:03:27.387 SO libspdk_nvme.so.13.0 00:03:27.647 SYMLINK libspdk_nvme.so 00:03:29.023 LIB libspdk_blob.a 00:03:29.023 SO libspdk_blob.so.11.0 00:03:29.023 SYMLINK libspdk_blob.so 00:03:29.282 CC lib/lvol/lvol.o 00:03:29.282 CC lib/blobfs/blobfs.o 00:03:29.282 CC lib/blobfs/tree.o 00:03:29.847 LIB libspdk_bdev.a 00:03:29.847 SO libspdk_bdev.so.15.0 00:03:29.847 SYMLINK libspdk_bdev.so 00:03:30.113 CC lib/ftl/ftl_core.o 00:03:30.113 CC lib/ftl/ftl_init.o 00:03:30.113 CC lib/ftl/ftl_layout.o 00:03:30.113 CC 
lib/scsi/dev.o 00:03:30.113 CC lib/ftl/ftl_debug.o 00:03:30.113 CC lib/scsi/lun.o 00:03:30.113 CC lib/ftl/ftl_io.o 00:03:30.113 CC lib/nbd/nbd.o 00:03:30.113 CC lib/scsi/port.o 00:03:30.113 CC lib/scsi/scsi.o 00:03:30.113 CC lib/nvmf/ctrlr.o 00:03:30.113 CC lib/ftl/ftl_sb.o 00:03:30.113 CC lib/scsi/scsi_bdev.o 00:03:30.113 CC lib/nbd/nbd_rpc.o 00:03:30.113 CC lib/ftl/ftl_l2p.o 00:03:30.113 CC lib/ftl/ftl_l2p_flat.o 00:03:30.113 CC lib/nvmf/ctrlr_discovery.o 00:03:30.113 CC lib/scsi/scsi_pr.o 00:03:30.113 CC lib/nvmf/ctrlr_bdev.o 00:03:30.113 CC lib/ftl/ftl_nv_cache.o 00:03:30.114 CC lib/scsi/scsi_rpc.o 00:03:30.114 CC lib/ftl/ftl_band.o 00:03:30.114 CC lib/scsi/task.o 00:03:30.114 CC lib/nvmf/subsystem.o 00:03:30.114 CC lib/nvmf/nvmf.o 00:03:30.114 CC lib/ftl/ftl_band_ops.o 00:03:30.114 CC lib/nvmf/nvmf_rpc.o 00:03:30.114 CC lib/ftl/ftl_writer.o 00:03:30.114 CC lib/ublk/ublk.o 00:03:30.114 CC lib/nvmf/transport.o 00:03:30.114 LIB libspdk_lvol.a 00:03:30.373 SO libspdk_lvol.so.10.0 00:03:30.373 LIB libspdk_blobfs.a 00:03:30.373 SYMLINK libspdk_lvol.so 00:03:30.373 SO libspdk_blobfs.so.10.0 00:03:30.373 CC lib/ftl/ftl_rq.o 00:03:30.373 CC lib/ftl/ftl_reloc.o 00:03:30.373 CC lib/ublk/ublk_rpc.o 00:03:30.373 CC lib/ftl/ftl_l2p_cache.o 00:03:30.373 CC lib/ftl/ftl_p2l.o 00:03:30.373 CC lib/nvmf/tcp.o 00:03:30.373 CC lib/nvmf/stubs.o 00:03:30.373 CC lib/ftl/mngt/ftl_mngt.o 00:03:30.640 SYMLINK libspdk_blobfs.so 00:03:30.640 CC lib/nvmf/mdns_server.o 00:03:30.640 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:30.640 CC lib/nvmf/vfio_user.o 00:03:30.640 CC lib/nvmf/rdma.o 00:03:30.640 CC lib/nvmf/auth.o 00:03:30.640 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:30.640 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.640 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:30.640 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:30.901 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:30.901 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:30.901 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:30.901 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:30.901 CC 
lib/ftl/mngt/ftl_mngt_p2l.o
00:03:30.901 LIB libspdk_nbd.a
00:03:30.901 SO libspdk_nbd.so.7.0
00:03:30.901 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:30.901 SYMLINK libspdk_nbd.so
00:03:30.901 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:30.901 CC lib/ftl/utils/ftl_conf.o
00:03:31.198 CC lib/ftl/utils/ftl_md.o
00:03:31.198 CC lib/ftl/utils/ftl_mempool.o
00:03:31.198 LIB libspdk_scsi.a
00:03:31.198 CC lib/ftl/utils/ftl_bitmap.o
00:03:31.198 CC lib/ftl/utils/ftl_property.o
00:03:31.198 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:31.198 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:31.198 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:31.198 SO libspdk_scsi.so.9.0
00:03:31.198 LIB libspdk_ublk.a
00:03:31.198 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:31.198 SO libspdk_ublk.so.3.0
00:03:31.499 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:31.499 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:31.499 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:31.499 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:31.499 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:31.499 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:31.499 SYMLINK libspdk_scsi.so
00:03:31.499 SYMLINK libspdk_ublk.so
00:03:31.499 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:31.499 CC lib/ftl/base/ftl_base_dev.o
00:03:31.499 CC lib/ftl/base/ftl_base_bdev.o
00:03:31.499 CC lib/ftl/ftl_trace.o
00:03:31.757 CC lib/iscsi/conn.o
00:03:31.757 CC lib/iscsi/init_grp.o
00:03:31.757 CC lib/iscsi/iscsi.o
00:03:31.757 CC lib/iscsi/md5.o
00:03:31.757 CC lib/iscsi/param.o
00:03:31.757 CC lib/iscsi/portal_grp.o
00:03:31.757 CC lib/iscsi/tgt_node.o
00:03:31.757 CC lib/iscsi/iscsi_subsystem.o
00:03:31.757 CC lib/iscsi/iscsi_rpc.o
00:03:31.757 CC lib/iscsi/task.o
00:03:31.757 CC lib/vhost/vhost.o
00:03:31.757 CC lib/vhost/vhost_rpc.o
00:03:31.757 CC lib/vhost/vhost_scsi.o
00:03:31.757 CC lib/vhost/vhost_blk.o
00:03:31.757 CC lib/vhost/rte_vhost_user.o
00:03:32.016 LIB libspdk_ftl.a
00:03:32.274 SO libspdk_ftl.so.9.0
00:03:32.532 SYMLINK libspdk_ftl.so
00:03:33.464 LIB libspdk_iscsi.a
00:03:33.464 LIB libspdk_vhost.a
00:03:33.464 SO libspdk_vhost.so.8.0
00:03:33.464 SO libspdk_iscsi.so.8.0
00:03:33.464 SYMLINK libspdk_vhost.so
00:03:33.464 SYMLINK libspdk_iscsi.so
00:03:33.464 LIB libspdk_nvmf.a
00:03:33.464 SO libspdk_nvmf.so.18.0
00:03:33.722 SYMLINK libspdk_nvmf.so
00:03:33.979 CC module/vfu_device/vfu_virtio.o
00:03:33.979 CC module/vfu_device/vfu_virtio_blk.o
00:03:33.979 CC module/vfu_device/vfu_virtio_scsi.o
00:03:33.979 CC module/vfu_device/vfu_virtio_rpc.o
00:03:33.979 CC module/env_dpdk/env_dpdk_rpc.o
00:03:33.979 CC module/accel/dsa/accel_dsa.o
00:03:33.979 CC module/accel/dsa/accel_dsa_rpc.o
00:03:33.979 CC module/accel/ioat/accel_ioat.o
00:03:33.979 CC module/accel/error/accel_error.o
00:03:33.979 CC module/accel/error/accel_error_rpc.o
00:03:33.979 CC module/accel/ioat/accel_ioat_rpc.o
00:03:33.979 CC module/blob/bdev/blob_bdev.o
00:03:33.979 CC module/keyring/file/keyring.o
00:03:33.979 CC module/scheduler/gscheduler/gscheduler.o
00:03:33.979 CC module/sock/posix/posix.o
00:03:33.979 CC module/keyring/file/keyring_rpc.o
00:03:33.979 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:33.979 CC module/keyring/linux/keyring.o
00:03:33.979 CC module/keyring/linux/keyring_rpc.o
00:03:33.979 CC module/accel/iaa/accel_iaa.o
00:03:33.979 CC module/accel/iaa/accel_iaa_rpc.o
00:03:33.979 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:34.236 LIB libspdk_env_dpdk_rpc.a
00:03:34.236 LIB libspdk_keyring_file.a
00:03:34.236 SO libspdk_env_dpdk_rpc.so.6.0
00:03:34.236 SO libspdk_keyring_file.so.1.0
00:03:34.236 LIB libspdk_keyring_linux.a
00:03:34.236 SYMLINK libspdk_env_dpdk_rpc.so
00:03:34.494 SYMLINK libspdk_keyring_file.so
00:03:34.494 LIB libspdk_scheduler_dpdk_governor.a
00:03:34.494 LIB libspdk_accel_iaa.a
00:03:34.494 SO libspdk_keyring_linux.so.1.0
00:03:34.494 LIB libspdk_scheduler_gscheduler.a
00:03:34.494 LIB libspdk_accel_error.a
00:03:34.494 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:34.494 LIB libspdk_blob_bdev.a
00:03:34.494 LIB libspdk_accel_ioat.a
00:03:34.494 SO libspdk_accel_iaa.so.3.0
00:03:34.494 SO libspdk_scheduler_gscheduler.so.4.0
00:03:34.494 SO libspdk_accel_error.so.2.0
00:03:34.494 SO libspdk_blob_bdev.so.11.0
00:03:34.494 SO libspdk_accel_ioat.so.6.0
00:03:34.494 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:34.494 SYMLINK libspdk_keyring_linux.so
00:03:34.494 SYMLINK libspdk_accel_iaa.so
00:03:34.494 SYMLINK libspdk_scheduler_gscheduler.so
00:03:34.494 SYMLINK libspdk_accel_error.so
00:03:34.494 SYMLINK libspdk_blob_bdev.so
00:03:34.494 SYMLINK libspdk_accel_ioat.so
00:03:34.494 LIB libspdk_scheduler_dynamic.a
00:03:34.494 SO libspdk_scheduler_dynamic.so.4.0
00:03:34.494 LIB libspdk_accel_dsa.a
00:03:34.494 SYMLINK libspdk_scheduler_dynamic.so
00:03:34.494 SO libspdk_accel_dsa.so.5.0
00:03:34.755 SYMLINK libspdk_accel_dsa.so
00:03:34.755 CC module/blobfs/bdev/blobfs_bdev.o
00:03:34.755 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:34.755 CC module/bdev/passthru/vbdev_passthru.o
00:03:34.755 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:34.755 CC module/bdev/aio/bdev_aio.o
00:03:34.755 CC module/bdev/null/bdev_null.o
00:03:34.755 CC module/bdev/aio/bdev_aio_rpc.o
00:03:34.755 CC module/bdev/null/bdev_null_rpc.o
00:03:34.755 CC module/bdev/error/vbdev_error.o
00:03:34.755 CC module/bdev/error/vbdev_error_rpc.o
00:03:34.755 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:34.755 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:34.755 CC module/bdev/delay/vbdev_delay.o
00:03:34.755 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:34.755 CC module/bdev/lvol/vbdev_lvol.o
00:03:34.755 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:34.755 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:34.755 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:34.755 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:34.755 CC module/bdev/iscsi/bdev_iscsi.o
00:03:34.755 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:34.755 CC module/bdev/malloc/bdev_malloc.o
00:03:34.755 CC module/bdev/ftl/bdev_ftl.o
00:03:34.755 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:34.755 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:34.755 CC module/bdev/raid/bdev_raid.o
00:03:34.755 CC module/bdev/nvme/bdev_nvme.o
00:03:34.755 CC module/bdev/split/vbdev_split.o
00:03:34.755 CC module/bdev/raid/bdev_raid_rpc.o
00:03:34.755 CC module/bdev/gpt/gpt.o
00:03:35.023 LIB libspdk_vfu_device.a
00:03:35.023 SO libspdk_vfu_device.so.3.0
00:03:35.023 CC module/bdev/gpt/vbdev_gpt.o
00:03:35.023 CC module/bdev/split/vbdev_split_rpc.o
00:03:35.023 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:35.023 CC module/bdev/raid/bdev_raid_sb.o
00:03:35.023 CC module/bdev/nvme/nvme_rpc.o
00:03:35.282 LIB libspdk_sock_posix.a
00:03:35.282 CC module/bdev/raid/raid0.o
00:03:35.282 CC module/bdev/nvme/bdev_mdns_client.o
00:03:35.282 LIB libspdk_blobfs_bdev.a
00:03:35.282 CC module/bdev/nvme/vbdev_opal.o
00:03:35.282 CC module/bdev/raid/raid1.o
00:03:35.282 SO libspdk_sock_posix.so.6.0
00:03:35.282 CC module/bdev/raid/concat.o
00:03:35.282 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:35.282 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:35.282 SO libspdk_blobfs_bdev.so.6.0
00:03:35.282 SYMLINK libspdk_vfu_device.so
00:03:35.282 LIB libspdk_bdev_null.a
00:03:35.282 SYMLINK libspdk_sock_posix.so
00:03:35.282 SO libspdk_bdev_null.so.6.0
00:03:35.282 SYMLINK libspdk_blobfs_bdev.so
00:03:35.282 LIB libspdk_bdev_passthru.a
00:03:35.282 LIB libspdk_bdev_ftl.a
00:03:35.282 SO libspdk_bdev_passthru.so.6.0
00:03:35.282 LIB libspdk_bdev_error.a
00:03:35.282 SYMLINK libspdk_bdev_null.so
00:03:35.282 SO libspdk_bdev_error.so.6.0
00:03:35.282 SO libspdk_bdev_ftl.so.6.0
00:03:35.541 LIB libspdk_bdev_zone_block.a
00:03:35.541 LIB libspdk_bdev_delay.a
00:03:35.541 LIB libspdk_bdev_iscsi.a
00:03:35.541 LIB libspdk_bdev_aio.a
00:03:35.541 LIB libspdk_bdev_split.a
00:03:35.541 SO libspdk_bdev_zone_block.so.6.0
00:03:35.541 SO libspdk_bdev_delay.so.6.0
00:03:35.541 SYMLINK libspdk_bdev_passthru.so
00:03:35.541 SO libspdk_bdev_iscsi.so.6.0
00:03:35.541 SO libspdk_bdev_aio.so.6.0
00:03:35.541 SYMLINK libspdk_bdev_error.so
00:03:35.541 SO libspdk_bdev_split.so.6.0
00:03:35.541 SYMLINK libspdk_bdev_ftl.so
00:03:35.541 SYMLINK libspdk_bdev_zone_block.so
00:03:35.541 SYMLINK libspdk_bdev_delay.so
00:03:35.541 SYMLINK libspdk_bdev_iscsi.so
00:03:35.541 SYMLINK libspdk_bdev_aio.so
00:03:35.541 LIB libspdk_bdev_malloc.a
00:03:35.541 SYMLINK libspdk_bdev_split.so
00:03:35.541 SO libspdk_bdev_malloc.so.6.0
00:03:35.541 LIB libspdk_bdev_gpt.a
00:03:35.541 SO libspdk_bdev_gpt.so.6.0
00:03:35.541 SYMLINK libspdk_bdev_malloc.so
00:03:35.541 LIB libspdk_bdev_virtio.a
00:03:35.541 SYMLINK libspdk_bdev_gpt.so
00:03:35.799 SO libspdk_bdev_virtio.so.6.0
00:03:35.799 LIB libspdk_bdev_lvol.a
00:03:35.799 SO libspdk_bdev_lvol.so.6.0
00:03:35.799 SYMLINK libspdk_bdev_virtio.so
00:03:35.799 SYMLINK libspdk_bdev_lvol.so
00:03:36.058 LIB libspdk_bdev_raid.a
00:03:36.317 SO libspdk_bdev_raid.so.6.0
00:03:36.317 SYMLINK libspdk_bdev_raid.so
00:03:37.253 LIB libspdk_bdev_nvme.a
00:03:37.253 SO libspdk_bdev_nvme.so.7.0
00:03:37.253 SYMLINK libspdk_bdev_nvme.so
00:03:37.820 CC module/event/subsystems/vmd/vmd.o
00:03:37.820 CC module/event/subsystems/iobuf/iobuf.o
00:03:37.820 CC module/event/subsystems/keyring/keyring.o
00:03:37.820 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:37.820 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:37.820 CC module/event/subsystems/scheduler/scheduler.o
00:03:37.820 CC module/event/subsystems/sock/sock.o
00:03:37.820 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:37.820 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:37.820 LIB libspdk_event_keyring.a
00:03:37.820 LIB libspdk_event_vhost_blk.a
00:03:37.820 LIB libspdk_event_sock.a
00:03:37.820 LIB libspdk_event_scheduler.a
00:03:37.820 LIB libspdk_event_vfu_tgt.a
00:03:37.820 LIB libspdk_event_vmd.a
00:03:37.820 LIB libspdk_event_iobuf.a
00:03:37.820 SO libspdk_event_keyring.so.1.0
00:03:37.820 SO libspdk_event_vhost_blk.so.3.0
00:03:37.820 SO libspdk_event_sock.so.5.0
00:03:37.820 SO libspdk_event_scheduler.so.4.0
00:03:37.820 SO libspdk_event_vfu_tgt.so.3.0
00:03:37.820 SO libspdk_event_vmd.so.6.0
00:03:38.079 SO libspdk_event_iobuf.so.3.0
00:03:38.079 SYMLINK libspdk_event_keyring.so
00:03:38.079 SYMLINK libspdk_event_vhost_blk.so
00:03:38.079 SYMLINK libspdk_event_sock.so
00:03:38.079 SYMLINK libspdk_event_vfu_tgt.so
00:03:38.079 SYMLINK libspdk_event_scheduler.so
00:03:38.079 SYMLINK libspdk_event_vmd.so
00:03:38.079 SYMLINK libspdk_event_iobuf.so
00:03:38.337 CC module/event/subsystems/accel/accel.o
00:03:38.337 LIB libspdk_event_accel.a
00:03:38.337 SO libspdk_event_accel.so.6.0
00:03:38.596 SYMLINK libspdk_event_accel.so
00:03:38.596 CC module/event/subsystems/bdev/bdev.o
00:03:38.854 LIB libspdk_event_bdev.a
00:03:38.854 SO libspdk_event_bdev.so.6.0
00:03:38.854 SYMLINK libspdk_event_bdev.so
00:03:39.112 CC module/event/subsystems/scsi/scsi.o
00:03:39.112 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:39.112 CC module/event/subsystems/nbd/nbd.o
00:03:39.112 CC module/event/subsystems/ublk/ublk.o
00:03:39.112 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:39.369 LIB libspdk_event_nbd.a
00:03:39.369 LIB libspdk_event_ublk.a
00:03:39.369 LIB libspdk_event_scsi.a
00:03:39.369 SO libspdk_event_ublk.so.3.0
00:03:39.369 SO libspdk_event_nbd.so.6.0
00:03:39.369 SO libspdk_event_scsi.so.6.0
00:03:39.369 SYMLINK libspdk_event_ublk.so
00:03:39.369 SYMLINK libspdk_event_nbd.so
00:03:39.369 SYMLINK libspdk_event_scsi.so
00:03:39.369 LIB libspdk_event_nvmf.a
00:03:39.369 SO libspdk_event_nvmf.so.6.0
00:03:39.626 SYMLINK libspdk_event_nvmf.so
00:03:39.626 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:39.626 CC module/event/subsystems/iscsi/iscsi.o
00:03:39.626 LIB libspdk_event_vhost_scsi.a
00:03:39.884 SO libspdk_event_vhost_scsi.so.3.0
00:03:39.884 LIB libspdk_event_iscsi.a
00:03:39.884 SO libspdk_event_iscsi.so.6.0
00:03:39.884 SYMLINK libspdk_event_vhost_scsi.so
00:03:39.884 SYMLINK libspdk_event_iscsi.so
00:03:39.884 SO libspdk.so.6.0
00:03:39.884 SYMLINK libspdk.so
00:03:40.148 CC app/trace_record/trace_record.o
00:03:40.148 CXX app/trace/trace.o
00:03:40.148 CC app/spdk_nvme_discover/discovery_aer.o
00:03:40.148 CC app/spdk_nvme_identify/identify.o
00:03:40.148 CC app/spdk_nvme_perf/perf.o
00:03:40.148 CC app/spdk_top/spdk_top.o
00:03:40.148 CC app/spdk_lspci/spdk_lspci.o
00:03:40.148 TEST_HEADER include/spdk/accel.h
00:03:40.148 TEST_HEADER include/spdk/accel_module.h
00:03:40.148 TEST_HEADER include/spdk/assert.h
00:03:40.148 TEST_HEADER include/spdk/barrier.h
00:03:40.148 TEST_HEADER include/spdk/base64.h
00:03:40.148 TEST_HEADER include/spdk/bdev.h
00:03:40.148 TEST_HEADER include/spdk/bdev_module.h
00:03:40.148 TEST_HEADER include/spdk/bdev_zone.h
00:03:40.148 TEST_HEADER include/spdk/bit_array.h
00:03:40.148 TEST_HEADER include/spdk/bit_pool.h
00:03:40.148 TEST_HEADER include/spdk/blob_bdev.h
00:03:40.148 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:40.148 TEST_HEADER include/spdk/blobfs.h
00:03:40.148 TEST_HEADER include/spdk/blob.h
00:03:40.410 TEST_HEADER include/spdk/conf.h
00:03:40.410 TEST_HEADER include/spdk/config.h
00:03:40.410 TEST_HEADER include/spdk/cpuset.h
00:03:40.410 TEST_HEADER include/spdk/crc16.h
00:03:40.410 TEST_HEADER include/spdk/crc32.h
00:03:40.410 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:40.410 TEST_HEADER include/spdk/crc64.h
00:03:40.410 TEST_HEADER include/spdk/dif.h
00:03:40.410 CC app/spdk_dd/spdk_dd.o
00:03:40.410 TEST_HEADER include/spdk/dma.h
00:03:40.410 TEST_HEADER include/spdk/endian.h
00:03:40.410 TEST_HEADER include/spdk/env_dpdk.h
00:03:40.410 TEST_HEADER include/spdk/env.h
00:03:40.410 TEST_HEADER include/spdk/event.h
00:03:40.410 TEST_HEADER include/spdk/fd_group.h
00:03:40.410 TEST_HEADER include/spdk/fd.h
00:03:40.410 CC app/iscsi_tgt/iscsi_tgt.o
00:03:40.410 TEST_HEADER include/spdk/file.h
00:03:40.410 TEST_HEADER include/spdk/ftl.h
00:03:40.410 CC app/nvmf_tgt/nvmf_main.o
00:03:40.410 TEST_HEADER include/spdk/gpt_spec.h
00:03:40.410 TEST_HEADER include/spdk/hexlify.h
00:03:40.410 TEST_HEADER include/spdk/histogram_data.h
00:03:40.410 TEST_HEADER include/spdk/idxd.h
00:03:40.410 TEST_HEADER include/spdk/idxd_spec.h
00:03:40.410 TEST_HEADER include/spdk/init.h
00:03:40.410 TEST_HEADER include/spdk/ioat.h
00:03:40.410 CC app/vhost/vhost.o
00:03:40.410 TEST_HEADER include/spdk/ioat_spec.h
00:03:40.410 TEST_HEADER include/spdk/iscsi_spec.h
00:03:40.410 CC examples/vmd/lsvmd/lsvmd.o
00:03:40.410 TEST_HEADER include/spdk/json.h
00:03:40.410 TEST_HEADER include/spdk/jsonrpc.h
00:03:40.410 CC examples/util/zipf/zipf.o
00:03:40.410 CC examples/idxd/perf/perf.o
00:03:40.410 TEST_HEADER include/spdk/keyring.h
00:03:40.410 CC examples/ioat/perf/perf.o
00:03:40.410 CC test/event/event_perf/event_perf.o
00:03:40.410 TEST_HEADER include/spdk/keyring_module.h
00:03:40.410 CC examples/sock/hello_world/hello_sock.o
00:03:40.410 CC examples/nvme/hello_world/hello_world.o
00:03:40.410 TEST_HEADER include/spdk/likely.h
00:03:40.410 TEST_HEADER include/spdk/log.h
00:03:40.410 TEST_HEADER include/spdk/lvol.h
00:03:40.411 CC examples/accel/perf/accel_perf.o
00:03:40.411 TEST_HEADER include/spdk/memory.h
00:03:40.411 CC app/spdk_tgt/spdk_tgt.o
00:03:40.411 TEST_HEADER include/spdk/mmio.h
00:03:40.411 TEST_HEADER include/spdk/nbd.h
00:03:40.411 TEST_HEADER include/spdk/notify.h
00:03:40.411 TEST_HEADER include/spdk/nvme.h
00:03:40.411 TEST_HEADER include/spdk/nvme_intel.h
00:03:40.411 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:40.411 CC examples/bdev/hello_world/hello_bdev.o
00:03:40.411 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:40.411 TEST_HEADER include/spdk/nvme_spec.h
00:03:40.411 TEST_HEADER include/spdk/nvme_zns.h
00:03:40.411 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:40.411 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:40.411 CC test/dma/test_dma/test_dma.o
00:03:40.411 TEST_HEADER include/spdk/nvmf.h
00:03:40.411 CC test/accel/dif/dif.o
00:03:40.411 TEST_HEADER include/spdk/nvmf_spec.h
00:03:40.411 TEST_HEADER include/spdk/nvmf_transport.h
00:03:40.411 CC test/app/bdev_svc/bdev_svc.o
00:03:40.411 TEST_HEADER include/spdk/opal.h
00:03:40.411 CC test/blobfs/mkfs/mkfs.o
00:03:40.411 CC test/bdev/bdevio/bdevio.o
00:03:40.411 TEST_HEADER include/spdk/opal_spec.h
00:03:40.411 CC examples/blob/hello_world/hello_blob.o
00:03:40.411 CC examples/nvmf/nvmf/nvmf.o
00:03:40.411 TEST_HEADER include/spdk/pci_ids.h
00:03:40.411 CC examples/thread/thread/thread_ex.o
00:03:40.411 TEST_HEADER include/spdk/pipe.h
00:03:40.411 TEST_HEADER include/spdk/queue.h
00:03:40.411 TEST_HEADER include/spdk/reduce.h
00:03:40.411 TEST_HEADER include/spdk/rpc.h
00:03:40.411 TEST_HEADER include/spdk/scheduler.h
00:03:40.411 TEST_HEADER include/spdk/scsi.h
00:03:40.411 TEST_HEADER include/spdk/scsi_spec.h
00:03:40.411 TEST_HEADER include/spdk/sock.h
00:03:40.411 TEST_HEADER include/spdk/stdinc.h
00:03:40.411 TEST_HEADER include/spdk/string.h
00:03:40.411 TEST_HEADER include/spdk/thread.h
00:03:40.676 TEST_HEADER include/spdk/trace.h
00:03:40.676 TEST_HEADER include/spdk/trace_parser.h
00:03:40.676 CC test/env/mem_callbacks/mem_callbacks.o
00:03:40.676 TEST_HEADER include/spdk/tree.h
00:03:40.676 TEST_HEADER include/spdk/ublk.h
00:03:40.676 TEST_HEADER include/spdk/util.h
00:03:40.676 LINK spdk_lspci
00:03:40.676 TEST_HEADER include/spdk/uuid.h
00:03:40.676 TEST_HEADER include/spdk/version.h
00:03:40.676 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:40.676 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:40.676 TEST_HEADER include/spdk/vhost.h
00:03:40.676 TEST_HEADER include/spdk/vmd.h
00:03:40.676 TEST_HEADER include/spdk/xor.h
00:03:40.676 TEST_HEADER include/spdk/zipf.h
00:03:40.676 CXX test/cpp_headers/accel.o
00:03:40.676 LINK spdk_nvme_discover
00:03:40.676 LINK interrupt_tgt
00:03:40.676 LINK lsvmd
00:03:40.676 LINK event_perf
00:03:40.676 LINK spdk_trace_record
00:03:40.676 LINK zipf
00:03:40.676 LINK nvmf_tgt
00:03:40.676 LINK vhost
00:03:40.938 LINK iscsi_tgt
00:03:40.938 LINK bdev_svc
00:03:40.938 LINK ioat_perf
00:03:40.938 LINK hello_world
00:03:40.938 LINK spdk_tgt
00:03:40.938 LINK hello_bdev
00:03:40.938 LINK mkfs
00:03:40.938 LINK hello_sock
00:03:40.938 CXX test/cpp_headers/accel_module.o
00:03:40.938 LINK hello_blob
00:03:40.938 LINK mem_callbacks
00:03:40.938 LINK spdk_dd
00:03:40.938 LINK spdk_trace
00:03:40.938 LINK thread
00:03:40.938 CC examples/nvme/reconnect/reconnect.o
00:03:41.199 LINK idxd_perf
00:03:41.199 CC test/event/reactor/reactor.o
00:03:41.199 CC examples/ioat/verify/verify.o
00:03:41.199 LINK nvmf
00:03:41.199 CC examples/vmd/led/led.o
00:03:41.199 CXX test/cpp_headers/assert.o
00:03:41.199 CC examples/blob/cli/blobcli.o
00:03:41.199 LINK bdevio
00:03:41.199 LINK test_dma
00:03:41.199 CC examples/bdev/bdevperf/bdevperf.o
00:03:41.199 CXX test/cpp_headers/barrier.o
00:03:41.199 LINK dif
00:03:41.199 CC test/env/vtophys/vtophys.o
00:03:41.199 CXX test/cpp_headers/base64.o
00:03:41.199 CC test/event/reactor_perf/reactor_perf.o
00:03:41.464 CC test/rpc_client/rpc_client_test.o
00:03:41.464 LINK accel_perf
00:03:41.464 LINK reactor
00:03:41.464 CXX test/cpp_headers/bdev.o
00:03:41.464 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:41.464 CC test/lvol/esnap/esnap.o
00:03:41.464 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:41.464 CXX test/cpp_headers/bdev_module.o
00:03:41.464 CC test/app/histogram_perf/histogram_perf.o
00:03:41.464 CC test/app/jsoncat/jsoncat.o
00:03:41.464 LINK led
00:03:41.464 CXX test/cpp_headers/bdev_zone.o
00:03:41.464 CC test/event/app_repeat/app_repeat.o
00:03:41.464 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:41.464 CC test/thread/poller_perf/poller_perf.o
00:03:41.464 CXX test/cpp_headers/bit_array.o
00:03:41.464 CC test/nvme/aer/aer.o
00:03:41.725 CC test/event/scheduler/scheduler.o
00:03:41.725 LINK verify
00:03:41.725 LINK vtophys
00:03:41.725 LINK reactor_perf
00:03:41.725 LINK spdk_nvme_perf
00:03:41.725 LINK reconnect
00:03:41.725 LINK spdk_nvme_identify
00:03:41.725 CC examples/nvme/arbitration/arbitration.o
00:03:41.725 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:41.725 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:41.725 CXX test/cpp_headers/bit_pool.o
00:03:41.725 LINK rpc_client_test
00:03:41.725 LINK jsoncat
00:03:41.725 LINK spdk_top
00:03:41.725 LINK histogram_perf
00:03:41.725 CC test/nvme/reset/reset.o
00:03:41.725 CC test/env/memory/memory_ut.o
00:03:41.725 LINK app_repeat
00:03:41.725 CC test/nvme/sgl/sgl.o
00:03:41.987 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:41.987 LINK poller_perf
00:03:41.987 CC test/app/stub/stub.o
00:03:41.987 CXX test/cpp_headers/blob_bdev.o
00:03:41.987 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:41.987 CC examples/nvme/hotplug/hotplug.o
00:03:41.987 LINK env_dpdk_post_init
00:03:41.987 CXX test/cpp_headers/blobfs_bdev.o
00:03:41.987 CC examples/nvme/abort/abort.o
00:03:41.987 CXX test/cpp_headers/blobfs.o
00:03:41.987 CC test/env/pci/pci_ut.o
00:03:41.987 LINK aer
00:03:41.987 LINK scheduler
00:03:42.249 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:42.249 CC test/nvme/e2edp/nvme_dp.o
00:03:42.249 CC test/nvme/overhead/overhead.o
00:03:42.249 LINK blobcli
00:03:42.249 CC app/fio/nvme/fio_plugin.o
00:03:42.249 LINK nvme_fuzz
00:03:42.249 CXX test/cpp_headers/blob.o
00:03:42.249 CC test/nvme/startup/startup.o
00:03:42.249 CC test/nvme/err_injection/err_injection.o
00:03:42.249 CC test/nvme/reserve/reserve.o
00:03:42.249 LINK stub
00:03:42.249 CC app/fio/bdev/fio_plugin.o
00:03:42.249 CC test/nvme/simple_copy/simple_copy.o
00:03:42.249 LINK reset
00:03:42.515 LINK arbitration
00:03:42.515 LINK sgl
00:03:42.515 CXX test/cpp_headers/conf.o
00:03:42.515 LINK nvme_manage
00:03:42.515 LINK cmb_copy
00:03:42.515 CXX test/cpp_headers/config.o
00:03:42.515 LINK pmr_persistence
00:03:42.515 CXX test/cpp_headers/cpuset.o
00:03:42.515 CC test/nvme/boot_partition/boot_partition.o
00:03:42.515 CXX test/cpp_headers/crc16.o
00:03:42.515 CC test/nvme/connect_stress/connect_stress.o
00:03:42.515 CXX test/cpp_headers/crc32.o
00:03:42.515 CC test/nvme/compliance/nvme_compliance.o
00:03:42.515 CXX test/cpp_headers/crc64.o
00:03:42.515 LINK hotplug
00:03:42.515 CC test/nvme/fused_ordering/fused_ordering.o
00:03:42.515 LINK nvme_dp
00:03:42.778 LINK err_injection
00:03:42.778 LINK startup
00:03:42.778 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:42.778 LINK reserve
00:03:42.778 LINK vhost_fuzz
00:03:42.778 CXX test/cpp_headers/dif.o
00:03:42.778 LINK bdevperf
00:03:42.778 LINK overhead
00:03:42.778 CXX test/cpp_headers/dma.o
00:03:42.778 CXX test/cpp_headers/endian.o
00:03:42.778 CXX test/cpp_headers/env_dpdk.o
00:03:42.778 LINK simple_copy
00:03:42.778 CXX test/cpp_headers/env.o
00:03:42.778 LINK pci_ut
00:03:42.778 LINK abort
00:03:42.778 CXX test/cpp_headers/event.o
00:03:42.778 LINK boot_partition
00:03:42.778 CC test/nvme/fdp/fdp.o
00:03:42.778 CC test/nvme/cuse/cuse.o
00:03:42.778 CXX test/cpp_headers/fd_group.o
00:03:42.778 LINK connect_stress
00:03:43.037 CXX test/cpp_headers/fd.o
00:03:43.037 CXX test/cpp_headers/file.o
00:03:43.037 CXX test/cpp_headers/ftl.o
00:03:43.037 CXX test/cpp_headers/gpt_spec.o
00:03:43.037 CXX test/cpp_headers/hexlify.o
00:03:43.037 CXX test/cpp_headers/histogram_data.o
00:03:43.037 CXX test/cpp_headers/idxd.o
00:03:43.037 CXX test/cpp_headers/idxd_spec.o
00:03:43.037 CXX test/cpp_headers/init.o
00:03:43.037 LINK fused_ordering
00:03:43.037 LINK doorbell_aers
00:03:43.037 CXX test/cpp_headers/ioat.o
00:03:43.037 CXX test/cpp_headers/ioat_spec.o
00:03:43.037 CXX test/cpp_headers/iscsi_spec.o
00:03:43.037 CXX test/cpp_headers/json.o
00:03:43.037 CXX test/cpp_headers/jsonrpc.o
00:03:43.037 CXX test/cpp_headers/keyring.o
00:03:43.037 CXX test/cpp_headers/keyring_module.o
00:03:43.037 LINK spdk_nvme
00:03:43.037 CXX test/cpp_headers/likely.o
00:03:43.037 CXX test/cpp_headers/log.o
00:03:43.298 LINK nvme_compliance
00:03:43.298 CXX test/cpp_headers/lvol.o
00:03:43.298 LINK spdk_bdev
00:03:43.298 CXX test/cpp_headers/memory.o
00:03:43.298 CXX test/cpp_headers/mmio.o
00:03:43.298 LINK memory_ut
00:03:43.298 CXX test/cpp_headers/nbd.o
00:03:43.298 CXX test/cpp_headers/notify.o
00:03:43.298 CXX test/cpp_headers/nvme.o
00:03:43.298 CXX test/cpp_headers/nvme_intel.o
00:03:43.298 CXX test/cpp_headers/nvme_ocssd.o
00:03:43.298 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:43.298 CXX test/cpp_headers/nvme_spec.o
00:03:43.298 CXX test/cpp_headers/nvme_zns.o
00:03:43.298 CXX test/cpp_headers/nvmf_cmd.o
00:03:43.298 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:43.298 CXX test/cpp_headers/nvmf.o
00:03:43.298 CXX test/cpp_headers/nvmf_spec.o
00:03:43.298 CXX test/cpp_headers/nvmf_transport.o
00:03:43.298 CXX test/cpp_headers/opal.o
00:03:43.564 CXX test/cpp_headers/opal_spec.o
00:03:43.564 CXX test/cpp_headers/pci_ids.o
00:03:43.564 CXX test/cpp_headers/pipe.o
00:03:43.564 CXX test/cpp_headers/queue.o
00:03:43.564 CXX test/cpp_headers/reduce.o
00:03:43.564 CXX test/cpp_headers/rpc.o
00:03:43.564 CXX test/cpp_headers/scheduler.o
00:03:43.564 CXX test/cpp_headers/scsi.o
00:03:43.564 CXX test/cpp_headers/scsi_spec.o
00:03:43.564 CXX test/cpp_headers/sock.o
00:03:43.564 CXX test/cpp_headers/stdinc.o
00:03:43.564 CXX test/cpp_headers/string.o
00:03:43.564 LINK fdp
00:03:43.564 CXX test/cpp_headers/thread.o
00:03:43.564 CXX test/cpp_headers/trace.o
00:03:43.564 CXX test/cpp_headers/trace_parser.o
00:03:43.564 CXX test/cpp_headers/tree.o
00:03:43.564 CXX test/cpp_headers/ublk.o
00:03:43.564 CXX test/cpp_headers/util.o
00:03:43.823 CXX test/cpp_headers/uuid.o
00:03:43.823 CXX test/cpp_headers/version.o
00:03:43.823 CXX test/cpp_headers/vfio_user_pci.o
00:03:43.823 CXX test/cpp_headers/vfio_user_spec.o
00:03:43.823 CXX test/cpp_headers/vhost.o
00:03:43.823 CXX test/cpp_headers/vmd.o
00:03:43.823 CXX test/cpp_headers/xor.o
00:03:43.823 CXX test/cpp_headers/zipf.o
00:03:44.390 LINK iscsi_fuzz
00:03:44.649 LINK cuse
00:03:48.915 LINK esnap
00:03:48.915
00:03:48.915 real 0m47.213s
00:03:48.915 user 8m18.060s
00:03:48.915 sys 1m44.887s
00:03:48.915 23:47:23 make -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:03:48.915 23:47:23 make -- common/autotest_common.sh@10 -- $ set +x
00:03:48.915 ************************************
00:03:48.915 END TEST make
00:03:48.915 ************************************
00:03:48.915 23:47:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:48.915 23:47:23 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:48.915 23:47:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:48.915 23:47:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.915 23:47:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:48.915 23:47:23 -- pm/common@44 -- $ pid=1060957
00:03:48.915 23:47:23 -- pm/common@50 -- $ kill -TERM 1060957
00:03:48.915 23:47:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.915 23:47:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:48.915 23:47:23 -- pm/common@44 -- $ pid=1060959
00:03:48.915 23:47:23 -- pm/common@50 -- $ kill -TERM 1060959
00:03:48.915 23:47:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.915 23:47:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:48.915 23:47:23 -- pm/common@44 -- $ pid=1060961
00:03:48.915 23:47:23 -- pm/common@50 -- $ kill -TERM 1060961
00:03:48.915 23:47:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:48.915 23:47:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:48.915 23:47:23 -- pm/common@44 -- $ pid=1060990
00:03:48.915 23:47:23 -- pm/common@50 -- $ sudo -E kill -TERM 1060990 00:03:48.915 23:47:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.915 23:47:23 -- nvmf/common.sh@7 -- # uname -s 00:03:48.915 23:47:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.915 23:47:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.915 23:47:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.915 23:47:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.915 23:47:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.915 23:47:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.915 23:47:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.915 23:47:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.915 23:47:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.915 23:47:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.915 23:47:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:03:48.915 23:47:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:03:48.915 23:47:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.915 23:47:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.915 23:47:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:48.915 23:47:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.915 23:47:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:48.915 23:47:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.915 23:47:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.915 23:47:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.915 23:47:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.915 23:47:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.915 23:47:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.915 23:47:23 -- paths/export.sh@5 -- # export PATH 00:03:48.915 23:47:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.915 23:47:23 -- nvmf/common.sh@47 -- # : 0 00:03:48.915 23:47:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:48.915 23:47:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:48.915 23:47:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.915 23:47:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.915 23:47:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.915 23:47:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:48.915 23:47:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:48.915 23:47:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:48.915 23:47:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.915 23:47:23 -- spdk/autotest.sh@32 -- # 
uname -s 00:03:48.915 23:47:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.915 23:47:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.915 23:47:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:48.915 23:47:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.915 23:47:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:48.915 23:47:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.915 23:47:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.915 23:47:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.915 23:47:23 -- spdk/autotest.sh@48 -- # udevadm_pid=1134959 00:03:48.915 23:47:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.915 23:47:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.915 23:47:23 -- pm/common@17 -- # local monitor 00:03:48.915 23:47:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.915 23:47:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.915 23:47:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.915 23:47:23 -- pm/common@21 -- # date +%s 00:03:48.915 23:47:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.915 23:47:23 -- pm/common@21 -- # date +%s 00:03:48.915 23:47:23 -- pm/common@25 -- # sleep 1 00:03:48.915 23:47:23 -- pm/common@21 -- # date +%s 00:03:48.915 23:47:23 -- pm/common@21 -- # date +%s 00:03:48.915 23:47:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080043 00:03:48.915 23:47:23 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080043 00:03:48.915 23:47:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080043 00:03:48.915 23:47:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721080043 00:03:48.915 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080043_collect-vmstat.pm.log 00:03:48.915 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080043_collect-cpu-load.pm.log 00:03:48.915 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080043_collect-cpu-temp.pm.log 00:03:48.915 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721080043_collect-bmc-pm.bmc.pm.log 00:03:49.851 23:47:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.851 23:47:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.851 23:47:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:49.851 23:47:24 -- common/autotest_common.sh@10 -- # set +x 00:03:49.851 23:47:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.851 23:47:24 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:49.851 23:47:24 -- common/autotest_common.sh@10 -- # set +x 00:03:49.851 23:47:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:49.851 23:47:24 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.851 23:47:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.851 23:47:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:49.851 23:47:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.851 23:47:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:49.851 23:47:24 -- common/autotest_common.sh@1451 -- # uname 00:03:49.851 23:47:24 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:49.851 23:47:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:49.851 23:47:24 -- common/autotest_common.sh@1471 -- # uname 00:03:49.851 23:47:24 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:49.851 23:47:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:49.851 23:47:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:49.851 23:47:24 -- spdk/autotest.sh@72 -- # hash lcov 00:03:49.851 23:47:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:49.851 23:47:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:49.851 --rc lcov_branch_coverage=1 00:03:49.851 --rc lcov_function_coverage=1 00:03:49.851 --rc genhtml_branch_coverage=1 00:03:49.851 --rc genhtml_function_coverage=1 00:03:49.851 --rc genhtml_legend=1 00:03:49.851 --rc geninfo_all_blocks=1 00:03:49.851 ' 00:03:49.851 23:47:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:49.851 --rc lcov_branch_coverage=1 00:03:49.851 --rc lcov_function_coverage=1 00:03:49.851 --rc genhtml_branch_coverage=1 00:03:49.851 --rc genhtml_function_coverage=1 00:03:49.851 --rc genhtml_legend=1 00:03:49.851 --rc geninfo_all_blocks=1 00:03:49.851 ' 00:03:49.851 23:47:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:49.851 --rc lcov_branch_coverage=1 00:03:49.851 --rc lcov_function_coverage=1 00:03:49.851 --rc genhtml_branch_coverage=1 00:03:49.851 --rc 
genhtml_function_coverage=1 00:03:49.851 --rc genhtml_legend=1 00:03:49.851 --rc geninfo_all_blocks=1 00:03:49.851 --no-external' 00:03:49.851 23:47:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:49.851 --rc lcov_branch_coverage=1 00:03:49.851 --rc lcov_function_coverage=1 00:03:49.851 --rc genhtml_branch_coverage=1 00:03:49.851 --rc genhtml_function_coverage=1 00:03:49.851 --rc genhtml_legend=1 00:03:49.851 --rc geninfo_all_blocks=1 00:03:49.851 --no-external' 00:03:49.851 23:47:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.109 lcov: LCOV version 1.14 00:03:50.109 23:47:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:08.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.177 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:20.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:20.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:20.364 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:20.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:04:20.365 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:20.365 [... identical "no functions found" / geninfo WARNING pairs for the remaining test/cpp_headers/*.gcno files elided ...] 00:04:20.366 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:20.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:24.545 23:47:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:24.545 23:47:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.545 23:47:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.545 23:47:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:24.545 23:47:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.478 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:04:25.478 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:04:25.478 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:04:25.736 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:04:25.736 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:04:25.736 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:04:25.736 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:04:25.736 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:04:25.736 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:04:25.736 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:04:25.736 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:04:25.736 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:04:25.736 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:04:25.736 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:04:25.736 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:04:25.736 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:04:25.736 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:04:25.736 23:48:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:25.736 23:48:00 -- common/autotest_common.sh@1665 -- # zoned_devs=() 
00:04:25.736 23:48:00 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:25.736 23:48:00 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:25.736 23:48:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:25.736 23:48:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:25.736 23:48:00 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:25.736 23:48:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.736 23:48:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:25.736 23:48:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:25.736 23:48:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:25.736 23:48:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:25.736 23:48:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:25.736 23:48:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:25.736 23:48:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:25.736 No valid GPT data, bailing 00:04:25.736 23:48:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.736 23:48:00 -- scripts/common.sh@391 -- # pt= 00:04:25.736 23:48:00 -- scripts/common.sh@392 -- # return 1 00:04:25.736 23:48:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:25.736 1+0 records in 00:04:25.736 1+0 records out 00:04:25.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00192868 s, 544 MB/s 00:04:25.993 23:48:00 -- spdk/autotest.sh@118 -- # sync 00:04:25.993 23:48:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:25.993 23:48:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:25.993 23:48:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:27.366 23:48:01 -- spdk/autotest.sh@124 -- # uname -s 00:04:27.366 23:48:01 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:27.366 23:48:01 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:27.366 23:48:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.366 23:48:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.366 23:48:01 -- common/autotest_common.sh@10 -- # set +x 00:04:27.366 ************************************ 00:04:27.366 START TEST setup.sh 00:04:27.366 ************************************ 00:04:27.366 23:48:01 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:27.366 * Looking for test storage... 00:04:27.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.366 23:48:01 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:27.366 23:48:01 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:27.366 23:48:01 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:27.366 23:48:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:27.366 23:48:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.366 23:48:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.366 ************************************ 00:04:27.366 START TEST acl 00:04:27.366 ************************************ 00:04:27.366 23:48:01 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:27.624 * Looking for test storage... 
00:04:27.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.624 23:48:01 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:27.624 23:48:01 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:27.624 23:48:01 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.624 23:48:01 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.999 23:48:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:29.000 23:48:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:29.000 23:48:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.000 23:48:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:29.000 23:48:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.000 23:48:03 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.595 Hugepages 00:04:29.595 node hugesize free / total 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 00:04:29.595 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.595 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 
23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:29.863 23:48:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:29.863 23:48:04 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.863 23:48:04 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.863 23:48:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.863 ************************************ 00:04:29.863 START TEST denied 00:04:29.863 ************************************ 00:04:29.863 23:48:04 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:04:29.863 23:48:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:04:29.863 23:48:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:04:29.863 23:48:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:29.863 23:48:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.863 23:48:04 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.244 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.244 23:48:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.141 00:04:33.141 real 0m3.406s 00:04:33.141 user 0m0.982s 00:04:33.141 sys 0m1.661s 00:04:33.141 23:48:07 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.141 23:48:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:33.141 ************************************ 00:04:33.141 END TEST denied 00:04:33.141 ************************************ 00:04:33.141 23:48:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:33.141 23:48:07 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.141 23:48:07 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.141 23:48:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.400 ************************************ 00:04:33.400 START TEST allowed 00:04:33.400 
************************************ 00:04:33.400 23:48:07 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:04:33.400 23:48:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:04:33.400 23:48:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:33.400 23:48:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:04:33.400 23:48:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.400 23:48:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.301 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:04:35.301 23:48:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:35.301 23:48:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:35.301 23:48:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:35.301 23:48:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.301 23:48:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.678 00:04:36.678 real 0m3.262s 00:04:36.678 user 0m0.861s 00:04:36.678 sys 0m1.381s 00:04:36.678 23:48:10 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.678 23:48:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:36.678 ************************************ 00:04:36.678 END TEST allowed 00:04:36.678 ************************************ 00:04:36.678 00:04:36.678 real 0m9.113s 00:04:36.678 user 0m2.855s 00:04:36.678 sys 0m4.597s 00:04:36.678 23:48:10 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.678 23:48:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:36.678 ************************************ 00:04:36.678 END TEST acl 00:04:36.678 ************************************ 00:04:36.678 23:48:10 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.678 23:48:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.678 23:48:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.678 23:48:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.678 ************************************ 00:04:36.678 START TEST hugepages 00:04:36.678 ************************************ 00:04:36.678 23:48:11 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.678 * Looking for test storage... 00:04:36.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.678 
23:48:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 30820096 kB' 'MemAvailable: 34338632 kB' 'Buffers: 2704 kB' 'Cached: 15337188 kB' 'SwapCached: 0 kB' 'Active: 12287440 kB' 'Inactive: 3488292 kB' 'Active(anon): 11877836 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 439100 kB' 'Mapped: 160704 kB' 'Shmem: 11441996 kB' 'KReclaimable: 179044 kB' 'Slab: 438872 kB' 'SReclaimable: 179044 kB' 'SUnreclaim: 259828 kB' 'KernelStack: 9856 kB' 'PageTables: 7064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12823584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189616 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.678 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 
23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 
23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.679 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.680 23:48:11 setup.sh.hugepages -- 
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@31-32 -- # read loop: skipped CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp (one read/continue per field, no match)
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@29-30 -- # for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=2048, nodes_sys[1]=0
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # for each node (0, 1), for each "/sys/devices/system/node/node$node/hugepages/hugepages-"* entry: echo 0
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:36.680 23:48:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:36.680 23:48:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:36.680 23:48:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:36.680 23:48:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:36.680 ************************************
00:04:36.680 START TEST default_setup
00:04:36.680 ************************************
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0'); local node_ids
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0'); local user_nodes
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=(); local -g nodes_test
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70-71 -- # for _no_nodes in "${user_nodes[@]}": nodes_test[0]=1024
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.680 23:48:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:37.616 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:04:37.616 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:04:37.875 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:04:38.820 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32834328 kB' 'MemAvailable: 36352804 kB' 'Buffers: 2704 kB' 'Cached: 15337276 kB' 'SwapCached: 0 kB' 'Active: 12300512 kB' 'Inactive: 3488292 kB' 'Active(anon): 11890908 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452424 kB' 'Mapped: 160268 kB' 'Shmem: 11442084 kB' 'KReclaimable: 178924 kB' 'Slab: 438656 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259732 kB' 'KernelStack: 10080 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12836648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189884 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB'
00:04:38.820 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read loop: skipped MemTotal through HardwareCorrupted (one [[ field == AnonHugePages ]] / continue per /proc/meminfo field)
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32835568 kB' 'MemAvailable: 36354044 kB' 'Buffers: 2704 kB' 'Cached: 15337276 kB' 'SwapCached: 0 kB' 'Active: 12299916 kB' 'Inactive: 3488292 kB' 'Active(anon): 11890312 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451260 kB' 'Mapped: 160484 kB' 'Shmem: 11442084 kB' 'KReclaimable: 178924 kB' 'Slab: 438656 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259732 kB' 'KernelStack: 10080 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189772 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB'
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB'
00:04:38.821 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read loop: skipped MemTotal through KReclaimable so far (one [[ field == HugePages_Surp ]] / continue per /proc/meminfo field)
00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 
23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.822 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- 
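The trace above is a `set -x` expansion of the `get_meminfo` helper in setup/common.sh: it reads /proc/meminfo line by line with `IFS=': '`, skips every key that does not match the requested field, and echoes the matching value (here `HugePages_Surp`, yielding `surp=0`). A minimal sketch of that pattern, with a hypothetical function name and a sample snapshot taken from the values printed later in this log:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop seen in the trace (assumption:
# simplified, no per-node meminfo handling). Splits each line on ': ',
# compares the key to the requested field, and prints its value.
get_meminfo_sketch() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Matching key found: print the numeric value and stop.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  echo 0   # field absent: report 0, as the trace does
}

# Sample input using values from the printf dump later in this log.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Surp <<<"$sample"    # prints 0
get_meminfo_sketch HugePages_Total <<<"$sample"   # prints 1024
```

Each `[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` pair in the trace is one iteration of this loop with a non-matching key; the backslash-escaped pattern is just how xtrace renders a quoted literal on the right-hand side of `==`.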
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32835320 kB' 'MemAvailable: 36353796 kB' 'Buffers: 2704 kB' 'Cached: 15337300 kB' 'SwapCached: 0 kB' 'Active: 12298592 kB' 'Inactive: 3488292 kB' 'Active(anon): 11888988 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450000 kB' 'Mapped: 160248 kB' 'Shmem: 11442108 kB' 'KReclaimable: 178924 kB' 'Slab: 438756 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259832 kB' 'KernelStack: 9872 kB' 'PageTables: 6988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 
'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.823 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.824 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.825 nr_hugepages=1024 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.825 resv_hugepages=0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.825 
surplus_hugepages=0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.825 anon_hugepages=0 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.825 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32835448 kB' 'MemAvailable: 36353924 kB' 'Buffers: 2704 kB' 'Cached: 15337320 kB' 'SwapCached: 0 kB' 'Active: 12298528 kB' 'Inactive: 3488292 kB' 'Active(anon): 11888924 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 
'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449920 kB' 'Mapped: 160248 kB' 'Shmem: 11442128 kB' 'KReclaimable: 178924 kB' 'Slab: 438584 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259660 kB' 'KernelStack: 9872 kB' 'PageTables: 6976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189660 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.826 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 
'MemFree: 18968900 kB' 'MemUsed: 13865792 kB' 'SwapCached: 0 kB' 'Active: 7567668 kB' 'Inactive: 3334432 kB' 'Active(anon): 7403396 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654848 kB' 'Mapped: 64024 kB' 'AnonPages: 250336 kB' 'Shmem: 7156144 kB' 'KernelStack: 5480 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110768 kB' 'Slab: 258312 kB' 'SReclaimable: 110768 kB' 'SUnreclaim: 147544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:38.827 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:38.827 [identical setup/common.sh@32/@31 xtrace iterations for the remaining meminfo fields, MemFree through HugePages_Free, elided] 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:38.829 23:48:13
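The xtrace above shows setup/common.sh scanning `/proc/meminfo` (or a per-node meminfo file) field by field with `IFS=': '` and `read -r var val _`, skipping every key until it matches the requested one (here `HugePages_Surp`, which echoes 0). A minimal standalone sketch of that scan, with function and variable names assumed rather than copied from the script:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the get_meminfo-style scan seen in the trace.
# The real setup/common.sh may differ; names here are illustrative only.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-NUMA-node meminfo file if present.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key matches (the
        # repeated "[[ key == ... ]] / continue" lines in the trace).
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

total=$(get_meminfo_sketch MemTotal)
echo "MemTotal=${total} kB"
```

This linear scan is why the log contains one `[[ ... ]] / continue` pair per meminfo field: xtrace prints every comparison until the key is found.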
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.829 node0=1024 expecting 1024 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.829 00:04:38.829 real 0m2.128s 00:04:38.829 user 0m0.587s 00:04:38.829 sys 0m0.716s 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.829 23:48:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:38.829 ************************************ 00:04:38.829 END TEST default_setup 00:04:38.829 ************************************ 00:04:38.829 23:48:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:38.829 23:48:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.829 23:48:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.829 23:48:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.829 ************************************ 00:04:38.829 START TEST per_node_1G_alloc 00:04:38.829 ************************************ 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:38.829 
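The per_node_1G_alloc test starting here calls get_test_nr_hugepages with size=1048576 (kB); with the 2048 kB default hugepage size reported later in the trace, that works out to the nr_hugepages=512 set at hugepages.sh@57. A one-line sketch of that arithmetic (variable names assumed):

```shell
#!/usr/bin/env bash
# Size-to-page-count step implied by the trace: 1048576 kB requested,
# divided by the 2048 kB default hugepage size, gives 512 pages.
size_kb=1048576
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"
```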
23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.829 23:48:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.829 23:48:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.764 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:39.764 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.764 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:39.764 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:39.764 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:39.764 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:39.764 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:39.764 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:39.764 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:39.764 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:39.764 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:39.764 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:39.764 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:39.764 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:39.764 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:39.764 
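The loop just traced (hugepages.sh@70-71) assigns 512 pages to each node named in HUGENODE=0,1 before exporting NRHUGE=512. A small sketch of that per-node distribution, assuming the `nodes_test` associative-array layout suggested by the trace rather than the script's exact code:

```shell
#!/usr/bin/env bash
# Hedged sketch of get_test_nr_hugepages_per_node: each user-listed
# NUMA node gets the full per-node page count (512 here).
declare -A nodes_test=()
nr_hugepages=512
user_nodes=(0 1)           # from HUGENODE=0,1 in the trace
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]}"
done
```

With two nodes this yields 1024 pages total, matching the nr_hugepages=1024 that verify_nr_hugepages checks below.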
0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:39.764 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.764 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32866620 kB' 'MemAvailable: 36385096 kB' 'Buffers: 2704 kB' 'Cached: 15337388 kB' 'SwapCached: 0 kB' 'Active: 12300348 kB' 'Inactive: 3488292 kB' 'Active(anon): 11890744 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452044 kB' 'Mapped: 160408 kB' 'Shmem: 11442196 kB' 'KReclaimable: 178924 kB' 'Slab: 438532 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259608 kB' 'KernelStack: 10112 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189852 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.764 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.764 [identical setup/common.sh@32/@31 xtrace iterations for the remaining meminfo fields, MemFree through WritebackTmp, elided] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.031 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32866672 kB' 'MemAvailable: 36385148 kB' 'Buffers: 2704 kB' 'Cached: 15337388 kB' 'SwapCached: 0 kB' 'Active: 12298928 kB' 'Inactive: 3488292 kB' 'Active(anon): 11889324 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450300 kB' 'Mapped: 160448 kB' 'Shmem: 11442196 kB' 'KReclaimable: 178924 kB' 'Slab: 438496 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259572 kB' 'KernelStack: 9872 kB' 'PageTables: 7056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189756 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 
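The trace records above show setup/common.sh's `get_meminfo` helper reading a /proc/meminfo snapshot and scanning it field by field with `IFS=': ' read -r var val _` until the requested key matches. A minimal sketch of that parsing pattern (a simplification, not the actual helper: the real script also selects per-node `/sys/devices/system/node/node<N>/meminfo` files and reads the snapshot into an array first):

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern seen in this trace
# (simplified approximation of setup/common.sh; reads meminfo-style
# "Key: value [kB]" lines from stdin instead of a file).
get_meminfo() {
    local get=$1 var val _
    # IFS=': ' splits each line into key, value, and an optional "kB" suffix,
    # mirroring the repeated "[[ <field> == \H\u\g\e... ]]" checks in the log.
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Sample snapshot, taken from the printf dump in this log
sample='MemTotal: 52291180 kB
HugePages_Total: 1024
HugePages_Surp: 0'

printf '%s\n' "$sample" | get_meminfo HugePages_Surp   # prints 0
```

Because the loop stops at the first matching key and echoes only the numeric field, callers such as `setup/hugepages.sh` can capture the result directly, e.g. `surp=$(get_meminfo HugePages_Surp)`.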
00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.031 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@32 field checks and @32 continue, repeated for each /proc/meminfo field (MemFree through HugePages_Rsvd); none matches HugePages_Surp]
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val
_
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32865384 kB' 'MemAvailable: 36383860 kB' 'Buffers: 2704 kB' 'Cached: 15337412 kB' 'SwapCached: 0 kB' 'Active: 12298660 kB' 'Inactive: 3488292 kB'
'Active(anon): 11889056 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450036 kB' 'Mapped: 160300 kB' 'Shmem: 11442220 kB' 'KReclaimable: 178924 kB' 'Slab: 438536 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259612 kB' 'KernelStack: 9904 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189724 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.033 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 
23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.034 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.035 nr_hugepages=1024 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.035 resv_hugepages=0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.035 surplus_hugepages=0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.035 anon_hugepages=0 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32865384 kB' 'MemAvailable: 36383860 kB' 'Buffers: 2704 kB' 'Cached: 15337432 kB' 'SwapCached: 0 kB' 'Active: 12298700 kB' 'Inactive: 3488292 kB' 'Active(anon): 11889096 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450036 kB' 'Mapped: 160300 kB' 'Shmem: 11442240 kB' 'KReclaimable: 178924 kB' 'Slab: 438536 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259612 kB' 'KernelStack: 9904 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189724 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:40.035 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 
23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.036 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.037 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.037 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20034888 kB' 'MemUsed: 12799804 kB' 'SwapCached: 0 kB' 'Active: 7568424 kB' 'Inactive: 3334432 kB' 'Active(anon): 7404152 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654864 kB' 'Mapped: 64076 kB' 'AnonPages: 251132 kB' 'Shmem: 7156160 kB' 'KernelStack: 5544 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110768 kB' 'Slab: 258344 kB' 'SReclaimable: 110768 kB' 'SUnreclaim: 147576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.037 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.038 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 
00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 12830796 kB' 'MemUsed: 6625692 kB' 'SwapCached: 0 kB' 'Active: 4730256 kB' 'Inactive: 153860 kB' 'Active(anon): 4484924 kB' 'Inactive(anon): 0 kB' 'Active(file): 245332 kB' 'Inactive(file): 153860 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4685320 kB' 'Mapped: 96224 kB' 'AnonPages: 198864 kB' 'Shmem: 4286128 kB' 'KernelStack: 4344 kB' 'PageTables: 3188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68156 kB' 'Slab: 180192 kB' 'SReclaimable: 68156 kB' 'SUnreclaim: 112036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:40.039 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [... repeated xtrace for the remaining non-matching meminfo keys (MemFree .. HugePages_Free) elided ...] 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.040 node0=512 expecting 512 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:40.040 node1=512 expecting 512 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:40.040 00:04:40.040 real 0m1.124s 00:04:40.040 user 0m0.486s 00:04:40.040 sys 0m0.662s 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.040 23:48:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.040 ************************************ 00:04:40.040 END TEST per_node_1G_alloc 00:04:40.040 ************************************ 00:04:40.040 23:48:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:40.040 23:48:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.040 23:48:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.040 23:48:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.040 ************************************ 00:04:40.040 START TEST even_2G_alloc 00:04:40.040 ************************************ 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.040 23:48:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.424 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:41.424 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.424 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:41.424 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:41.424 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:41.424 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:41.424 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:41.424 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:41.424 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:41.424 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:41.424 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:41.424 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:41.424 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:41.424 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:41.424 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:41.424 0000:80:04.1 (8086 3c21): Already using the vfio-pci 
driver 00:04:41.424 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.424 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32861092 kB' 'MemAvailable: 36379568 kB' 'Buffers: 2704 kB' 'Cached: 15337524 kB' 'SwapCached: 0 kB' 'Active: 12298680 kB' 'Inactive: 3488292 kB' 'Active(anon): 11889076 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449900 kB' 'Mapped: 160416 kB' 'Shmem: 11442332 kB' 'KReclaimable: 178924 kB' 'Slab: 438764 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259840 kB' 'KernelStack: 9872 kB' 'PageTables: 6992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12834948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189788 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.424 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [... repeated xtrace for the remaining non-matching meminfo keys (MemAvailable .. Slab) elided ...] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.425 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858916 kB' 'MemAvailable: 36377392 kB' 'Buffers: 2704 kB' 'Cached: 15337524 kB' 'SwapCached: 0 kB' 'Active: 12299428 kB' 'Inactive: 3488292 kB' 'Active(anon): 11889824 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 
'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450652 kB' 'Mapped: 160808 kB' 'Shmem: 11442332 kB' 'KReclaimable: 178924 kB' 'Slab: 438780 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259856 kB' 'KernelStack: 9888 kB' 'PageTables: 6972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12836056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189772 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.426 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858008 kB' 'MemAvailable: 36376484 kB' 'Buffers: 2704 
kB' 'Cached: 15337552 kB' 'SwapCached: 0 kB' 'Active: 12302168 kB' 'Inactive: 3488292 kB' 'Active(anon): 11892564 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453360 kB' 'Mapped: 160732 kB' 'Shmem: 11442360 kB' 'KReclaimable: 178924 kB' 'Slab: 438796 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259872 kB' 'KernelStack: 9840 kB' 'PageTables: 6828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12839380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189740 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.427 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.428 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 
23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.429 nr_hugepages=1024 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.429 resv_hugepages=0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.429 surplus_hugepages=0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.429 anon_hugepages=0 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.429 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32853260 kB' 'MemAvailable: 36371736 kB' 'Buffers: 2704 kB' 'Cached: 15337564 kB' 'SwapCached: 0 kB' 'Active: 12304476 kB' 'Inactive: 3488292 kB' 'Active(anon): 11894872 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455656 kB' 'Mapped: 161212 kB' 'Shmem: 11442372 kB' 'KReclaimable: 178924 kB' 'Slab: 438756 kB' 'SReclaimable: 178924 kB' 'SUnreclaim: 259832 kB' 'KernelStack: 9872 kB' 'PageTables: 6936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12841128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189744 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:41.429 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.430 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.431 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20048796 kB' 'MemUsed: 12785896 kB' 'SwapCached: 0 kB' 'Active: 7570280 kB' 'Inactive: 3334432 kB' 'Active(anon): 7406008 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654936 kB' 'Mapped: 64060 kB' 'AnonPages: 252916 kB' 'Shmem: 7156232 kB' 'KernelStack: 5592 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110768 kB' 'Slab: 258496 kB' 'SReclaimable: 110768 kB' 'SUnreclaim: 147728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.431 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 
23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.432 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 12804464 kB' 'MemUsed: 6652024 kB' 'SwapCached: 0 kB' 'Active: 4730008 kB' 'Inactive: 153860 kB' 'Active(anon): 4484676 kB' 'Inactive(anon): 0 kB' 'Active(file): 245332 kB' 'Inactive(file): 153860 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4685356 kB' 'Mapped: 96668 kB' 'AnonPages: 198528 kB' 'Shmem: 4286164 kB' 'KernelStack: 4344 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68156 kB' 'Slab: 180252 kB' 'SReclaimable: 68156 kB' 'SUnreclaim: 112096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.433 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.434 node0=512 expecting 512 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:41.434 node1=512 expecting 512 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:41.434 00:04:41.434 real 0m1.280s 00:04:41.434 user 0m0.567s 00:04:41.434 sys 0m0.745s 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.434 23:48:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.434 ************************************ 00:04:41.434 END TEST even_2G_alloc 00:04:41.434 ************************************ 00:04:41.434 23:48:15 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.434 23:48:15 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.434 23:48:15 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.434 23:48:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.434 ************************************ 00:04:41.434 START TEST odd_alloc 00:04:41.434 ************************************ 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.434 23:48:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.368 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:42.368 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:42.368 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:42.368 0000:00:04.5 (8086 3c25): Already using the vfio-pci 
driver 00:04:42.368 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:42.368 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:42.368 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:42.368 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:42.368 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:42.368 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:42.368 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:42.369 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:42.369 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:42.369 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:42.369 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:42.369 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:42.369 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.636 23:48:16 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32846784 kB' 'MemAvailable: 36365256 kB' 'Buffers: 2704 kB' 'Cached: 15337656 kB' 'SwapCached: 0 kB' 'Active: 12303024 kB' 'Inactive: 3488292 kB' 'Active(anon): 11893420 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453740 kB' 'Mapped: 159700 kB' 'Shmem: 11442464 kB' 'KReclaimable: 178916 kB' 'Slab: 438736 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259820 kB' 'KernelStack: 9840 kB' 'PageTables: 6780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12830740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189728 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.636 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... per-field scan elided: Active(file) through HardwareCorrupted each compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped via 'continue' ...]
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.637 23:48:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32846896 kB' 'MemAvailable: 36365368 kB' 'Buffers: 2704 kB' 'Cached: 15337656 kB' 'SwapCached: 0 kB' 'Active: 12302564 kB' 'Inactive: 3488292 kB' 'Active(anon): 11892960 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453704 kB' 'Mapped: 159964 kB' 'Shmem: 11442464 kB' 'KReclaimable: 178916 kB' 'Slab: 438744 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259828 kB' 'KernelStack: 9824 kB' 'PageTables: 6740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12831124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189696 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB'
[... per-field scan elided: MemTotal through HugePages_Rsvd each compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via 'continue' ...]
00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.639 23:48:17 
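[Editor's note: the trace above is SPDK's `get_meminfo` helper in `setup/common.sh` scanning `/proc/meminfo` one field at a time with `IFS=': '` and `read -r var val _`, hitting `continue` on every non-matching key. A minimal standalone sketch of that pattern is below; the function body and the optional file argument are illustrative, not SPDK's exact source.]

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: read a meminfo-format file line
# by line, skip non-matching keys with 'continue' (the repeated records
# in the trace), and print the value of the requested key.
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # mismatched key: skip, as in the log
        echo "$val"                       # 'kB' unit lands in _, so val is numeric
        return 0
    done < "$file"
    return 1
}

# Demo against a small sample rather than the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'AnonHugePages: 0 kB' 'HugePages_Surp: 0' > "$sample"
get_meminfo AnonHugePages "$sample"   # prints: 0
get_meminfo HugePages_Surp "$sample"  # prints: 0
rm -f "$sample"
```

With `IFS=': '`, the colon and surrounding spaces act as one delimiter, so `MemTotal: 52291180 kB` splits into `var=MemTotal`, `val=52291180`, `_=kB`.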
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo setup elided: same common.sh@17-31 locals as above, with get=HugePages_Rsvd ...]
00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32845888 kB' 'MemAvailable: 36364360 kB' 'Buffers: 2704 kB' 'Cached: 15337676 kB' 'SwapCached: 0 kB' 'Active: 12296428 kB' 'Inactive: 3488292 kB' 'Active(anon): 11886824 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447572 kB' 'Mapped: 159140 kB' 'Shmem: 11442484 kB' 'KReclaimable: 178916 kB' 'Slab: 438740 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259824 kB' 'KernelStack: 9856 kB' 'PageTables: 6788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12825024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189692 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB'
[... per-field scan begins: each key compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, mismatches skipped via 'continue'; MemTotal through Buffers elided ...] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.639 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.640 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:42.641 nr_hugepages=1025 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.641 resv_hugepages=0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.641 surplus_hugepages=0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.641 anon_hugepages=0 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32845888 kB' 'MemAvailable: 36364360 kB' 'Buffers: 2704 kB' 'Cached: 15337696 kB' 'SwapCached: 0 kB' 'Active: 12296540 kB' 'Inactive: 3488292 kB' 'Active(anon): 11886936 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447684 kB' 'Mapped: 159140 kB' 'Shmem: 11442504 kB' 'KReclaimable: 178916 kB' 'Slab: 438740 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259824 kB' 'KernelStack: 9872 kB' 'PageTables: 6832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12825044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189692 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.641 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.642 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@27 -- # local node 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.643 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20048368 kB' 'MemUsed: 12786324 kB' 'SwapCached: 0 kB' 'Active: 7566948 kB' 'Inactive: 3334432 kB' 'Active(anon): 7402676 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654964 kB' 'Mapped: 63732 kB' 'AnonPages: 249564 kB' 'Shmem: 7156260 kB' 'KernelStack: 5496 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110760 kB' 'Slab: 258504 kB' 'SReclaimable: 110760 kB' 'SUnreclaim: 147744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.643 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.644 
23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 12797520 kB' 'MemUsed: 6658968 kB' 'SwapCached: 0 kB' 'Active: 4729900 kB' 'Inactive: 153860 kB' 'Active(anon): 4484568 kB' 'Inactive(anon): 0 kB' 'Active(file): 245332 kB' 'Inactive(file): 153860 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4685476 kB' 'Mapped: 95408 kB' 'AnonPages: 198424 kB' 'Shmem: 4286284 kB' 'KernelStack: 4392 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68156 kB' 'Slab: 180236 kB' 'SReclaimable: 68156 kB' 
'SUnreclaim: 112080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.644 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 repeats the identical "IFS=': ' / read -r var val _ / [[ $field == HugePages_Surp ]] / continue" iteration for every remaining /proc/meminfo field (MemFree through HugePages_Free) until HugePages_Surp matches]
00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:42.645 node0=512 expecting 513 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:42.645 node1=513 expecting 512 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:42.645 00:04:42.645 real 0m1.287s 00:04:42.645 user 0m0.602s 00:04:42.645 sys 0m0.722s 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.645 23:48:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.645 ************************************ 00:04:42.645 END TEST odd_alloc 00:04:42.645 ************************************ 00:04:42.645 23:48:17 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:42.646 23:48:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.646 23:48:17 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.646 23:48:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.906 ************************************ 00:04:42.906 START TEST custom_alloc 00:04:42.906 ************************************ 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:42.906 23:48:17 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:42.906 23:48:17 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:42.906 23:48:17 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.906 23:48:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.848 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:43.848 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:43.848 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:43.848 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:43.848 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:43.848 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:43.848 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:43.848 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:43.848 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:43.848 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:43.848 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:43.848 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:43.848 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:43.848 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:43.848 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:43.848 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:43.848 0000:80:04.0 (8086 3c20): Already using 
the vfio-pci driver 00:04:43.848 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:43.848 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:43.848 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.849 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31794512 kB' 'MemAvailable: 35312984 kB' 'Buffers: 2704 kB' 'Cached: 15337780 kB' 'SwapCached: 0 kB' 'Active: 12302644 kB' 'Inactive: 3488292 kB' 'Active(anon): 11893040 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453696 kB' 'Mapped: 159736 kB' 'Shmem: 11442588 kB' 'KReclaimable: 178916 kB' 'Slab: 438696 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259780 kB' 'KernelStack: 9856 kB' 'PageTables: 6860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12831236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189712 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.849 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace trimmed: setup/common.sh@31-32 repeats the identical "IFS=': ' / read -r var val _ / [[ $field == AnonHugePages ]] / continue" iteration for each /proc/meminfo field through NFS_Unstable]
00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 
23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.850 
23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31801928 kB' 'MemAvailable: 35320400 kB' 'Buffers: 2704 kB' 'Cached: 15337780 kB' 'SwapCached: 0 kB' 'Active: 12297476 kB' 'Inactive: 3488292 kB' 'Active(anon): 11887872 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 448496 kB' 'Mapped: 159216 kB' 'Shmem: 11442588 kB' 'KReclaimable: 178916 kB' 'Slab: 438704 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259788 kB' 'KernelStack: 9888 kB' 'PageTables: 6892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12825132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.850 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 
23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.851 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.852 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31802388 kB' 'MemAvailable: 35320860 kB' 'Buffers: 2704 kB' 'Cached: 15337804 kB' 'SwapCached: 0 kB' 'Active: 12296828 kB' 'Inactive: 3488292 kB' 'Active(anon): 11887224 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 447752 kB' 'Mapped: 159152 kB' 'Shmem: 11442612 kB' 'KReclaimable: 178916 kB' 'Slab: 438704 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259788 kB' 'KernelStack: 9872 kB' 'PageTables: 6832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12825152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189660 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.852 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 
23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.853 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:43.854 nr_hugepages=1536 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.854 resv_hugepages=0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.854 surplus_hugepages=0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.854 anon_hugepages=0 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.854 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31802140 kB' 'MemAvailable: 35320612 kB' 
'Buffers: 2704 kB' 'Cached: 15337808 kB' 'SwapCached: 0 kB' 'Active: 12296552 kB' 'Inactive: 3488292 kB' 'Active(anon): 11886948 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447472 kB' 'Mapped: 159152 kB' 'Shmem: 11442616 kB' 'KReclaimable: 178916 kB' 'Slab: 438704 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259788 kB' 'KernelStack: 9872 kB' 'PageTables: 6832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12825176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189660 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.855 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.855 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / [[ <key> == HugePages_Total ]] / continue cycles for the remaining /proc/meminfo keys (Mlocked through Unaccepted) omitted]
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.856 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20055556 kB' 'MemUsed: 12779136 kB' 'SwapCached: 0 kB' 'Active: 7566776 kB' 'Inactive: 3334432 kB' 'Active(anon): 7402504 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654968 kB' 'Mapped: 63744 kB' 'AnonPages: 249336 kB' 'Shmem: 7156264 kB' 'KernelStack: 5512 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110760 kB' 'Slab: 258492 kB' 'SReclaimable: 110760 kB' 'SUnreclaim: 147732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[identical IFS=': ' / read / [[ <key> == HugePages_Surp ]] / continue cycles for the node0 meminfo keys (MemTotal through HugePages_Free) omitted]
00:04:43.857 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.857 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.857 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 11746584 kB' 'MemUsed: 7709904 kB' 'SwapCached: 0 kB' 'Active: 4730088 kB' 'Inactive: 153860 kB' 'Active(anon): 4484756 kB' 'Inactive(anon): 0 kB' 'Active(file): 245332 kB' 'Inactive(file): 153860 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4685604 kB' 'Mapped: 95408 kB' 'AnonPages: 198416 kB' 'Shmem: 4286412 kB' 'KernelStack: 4360 kB' 'PageTables: 3156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 68156 kB' 'Slab: 180212 kB' 'SReclaimable: 68156 kB' 'SUnreclaim: 112056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[identical IFS=': ' / read / [[ <key> == HugePages_Surp ]] / continue cycles for the node1 meminfo keys (MemTotal through Dirty) omitted]
00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.858 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.859 23:48:18 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:43.859 node0=512 expecting 512 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:43.859 node1=1024 expecting 1024 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:43.859 00:04:43.859 real 0m1.123s 00:04:43.859 user 0m0.520s 00:04:43.859 sys 0m0.627s 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.859 23:48:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.859 ************************************ 00:04:43.859 END TEST custom_alloc 00:04:43.859 ************************************ 00:04:43.859 23:48:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:43.859 23:48:18 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.859 23:48:18 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.859 23:48:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.859 ************************************ 00:04:43.859 START TEST no_shrink_alloc 00:04:43.859 ************************************ 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:43.859 23:48:18 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.859 23:48:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.795 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:44.795 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:44.795 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:44.795 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:44.795 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:44.795 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:44.795 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:44.795 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:44.795 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:44.795 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:44.795 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:44.795 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:44.795 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:44.795 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:44.795 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:44.795 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:44.795 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858776 kB' 'MemAvailable: 36377248 kB' 'Buffers: 2704 kB' 'Cached: 15337904 kB' 'SwapCached: 0 kB' 'Active: 12297048 kB' 'Inactive: 3488292 kB' 'Active(anon): 11887444 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448004 kB' 'Mapped: 159660 kB' 'Shmem: 11442712 kB' 'KReclaimable: 178916 kB' 'Slab: 438588 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259672 kB' 'KernelStack: 9840 kB' 'PageTables: 6784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189676 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.061 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.062 
23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.062 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858776 kB' 'MemAvailable: 36377248 kB' 'Buffers: 2704 kB' 'Cached: 15337908 kB' 'SwapCached: 0 kB' 'Active: 12296856 kB' 'Inactive: 3488292 kB' 'Active(anon): 11887252 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447744 kB' 'Mapped: 
159240 kB' 'Shmem: 11442716 kB' 'KReclaimable: 178916 kB' 'Slab: 438588 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259672 kB' 'KernelStack: 9856 kB' 'PageTables: 6780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189628 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.063 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 
23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 
23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.064 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858788 kB' 'MemAvailable: 36377260 kB' 'Buffers: 2704 kB' 'Cached: 15337924 kB' 'SwapCached: 0 kB' 'Active: 12296928 kB' 'Inactive: 3488292 kB' 'Active(anon): 11887324 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447812 kB' 'Mapped: 159160 kB' 'Shmem: 11442732 kB' 'KReclaimable: 178916 kB' 'Slab: 438580 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259664 kB' 'KernelStack: 9872 kB' 'PageTables: 6828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189628 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.065 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 
23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:45.066 nr_hugepages=1024 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.066 resv_hugepages=0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.066 surplus_hugepages=0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.066 anon_hugepages=0 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.066 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.066 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32858788 kB' 'MemAvailable: 36377260 kB' 'Buffers: 2704 kB' 'Cached: 15337928 kB' 'SwapCached: 0 kB' 'Active: 12296484 kB' 'Inactive: 3488292 kB' 'Active(anon): 11886880 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447388 kB' 'Mapped: 159160 kB' 'Shmem: 11442736 kB' 'KReclaimable: 178916 kB' 'Slab: 438580 kB' 'SReclaimable: 178916 kB' 'SUnreclaim: 259664 kB' 'KernelStack: 9840 kB' 'PageTables: 6748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189596 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.067 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] [... repeated setup/common.sh@32 key-scan iterations elided: SwapTotal through Unaccepted, none matching HugePages_Total ...] 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- #
no_nodes=2 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.068 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 19029936 kB' 'MemUsed: 13804756 kB' 'SwapCached: 0 kB' 'Active: 7567036 kB' 'Inactive: 3334432 kB' 'Active(anon): 7402764 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10654972 kB' 'Mapped: 63752 kB' 'AnonPages: 249608 kB' 'Shmem: 7156268 kB' 'KernelStack: 5512 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110760 kB' 'Slab: 258348 kB' 'SReclaimable: 110760 kB' 'SUnreclaim: 147588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [... repeated setup/common.sh@32 key-scan iterations elided: MemTotal through HugePages_Free, none matching HugePages_Surp ...] 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@33 -- # echo 0 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.070 node0=1024 expecting 1024 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.070 23:48:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.047 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:46.047 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.047 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:46.047 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:46.047 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:46.047 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:46.047 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:46.047 0000:00:04.1 (8086 3c21): Already using the vfio-pci 
driver 00:04:46.047 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:46.047 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:04:46.047 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:04:46.047 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:04:46.047 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:04:46.047 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:04:46.047 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:04:46.047 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:04:46.047 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:04:46.047 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
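The trace above and below comes from the `get_meminfo` helper in SPDK's `setup/common.sh`, which selects `/proc/meminfo` or a per-node `/sys/devices/system/node/nodeN/meminfo` file and scans it field by field with `IFS=': '` until the requested key matches, then echoes the value. A minimal standalone sketch of the same parsing pattern, simplified relative to the real script (no `mapfile` buffering and no stripping of the per-node `Node N ` prefix; this is an illustration, not the exact SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: scan a meminfo-style
# file with IFS=': ' and print the value for one key. Simplified/illustrative.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # IFS=': ' splits "HugePages_Total:    1024" into var=HugePages_Total,
        # val=1024; any trailing unit such as "kB" lands in $_.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Usage: print total hugepages, mirroring the "echo 1024" step in the log.
get_meminfo HugePages_Total
```

The per-key `[[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue` lines in the log are this loop's xtrace output, one comparison per meminfo field, which is why they repeat for every key until the match.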
00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.047 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32862904 kB' 'MemAvailable: 36381408 kB' 'Buffers: 2704 kB' 'Cached: 15338012 kB' 'SwapCached: 0 kB' 'Active: 12303656 kB' 'Inactive: 3488292 kB' 'Active(anon): 11894052 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454428 kB' 'Mapped: 160196 kB' 'Shmem: 11442820 kB' 'KReclaimable: 178980 kB' 'Slab: 438728 kB' 'SReclaimable: 178980 kB' 'SUnreclaim: 259748 kB' 'KernelStack: 9840 kB' 'PageTables: 6784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12831752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189632 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:46.048 [repetitive xtrace elided: setup/common.sh@31-32 iterates over every other /proc/meminfo key and issues `continue` until AnonHugePages matches] 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.049 23:48:20
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.049 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32867884 kB' 'MemAvailable: 36386388 kB' 'Buffers: 2704 kB' 'Cached: 15338012 kB' 'SwapCached: 0 kB' 'Active: 12298424 kB' 'Inactive: 3488292 kB' 'Active(anon): 11888820 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449176 kB' 'Mapped: 159284 kB' 'Shmem: 11442820 kB' 'KReclaimable: 178980 kB' 'Slab: 438716 kB' 'SReclaimable: 178980 kB' 'SUnreclaim: 259736 kB' 'KernelStack: 9824 kB' 'PageTables: 6720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189580 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:46.049 [repetitive xtrace elided: setup/common.sh@31-32 iterates over the remaining /proc/meminfo keys and issues `continue` while matching against HugePages_Surp] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.050 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.051 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32868140 kB' 'MemAvailable: 36386644 kB' 'Buffers: 2704 kB' 'Cached: 15338032 kB' 'SwapCached: 0 kB' 'Active: 12297760 kB' 'Inactive: 3488292 kB' 'Active(anon): 11888156 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448420 kB' 'Mapped: 159168 kB' 'Shmem: 11442840 kB' 'KReclaimable: 178980 kB' 'Slab: 438724 kB' 
'SReclaimable: 178980 kB' 'SUnreclaim: 259744 kB' 'KernelStack: 9824 kB' 'PageTables: 6684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189596 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.313 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [trace condensed: get_meminfo HugePages_Rsvd scanned each /proc/meminfo field with IFS=': ' read -r var val _; fields Buffers through HugePages_Free did not match and were skipped via continue] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.314 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.314 nr_hugepages=1024 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.314 resv_hugepages=0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.314 surplus_hugepages=0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.314 anon_hugepages=0 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32868140 kB' 'MemAvailable: 36386644 kB' 'Buffers: 2704 kB' 'Cached: 15338056 kB' 'SwapCached: 0 kB' 'Active: 12297824 kB' 'Inactive: 3488292 kB' 'Active(anon): 11888220 kB' 'Inactive(anon): 0 kB' 'Active(file): 409604 kB' 'Inactive(file): 3488292 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448580 kB' 'Mapped: 159168 kB' 'Shmem: 11442864 kB' 'KReclaimable: 178980 kB' 'Slab: 438724 kB' 'SReclaimable: 178980 kB' 'SUnreclaim: 259744 kB' 'KernelStack: 9872 kB' 'PageTables: 6824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12825696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189612 kB' 'VmallocChunk: 0 kB' 'Percpu: 19712 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1141028 kB' 'DirectMap2M: 17655808 kB' 'DirectMap1G: 41943040 kB' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.314 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.314 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
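The trace above is setup/common.sh's get_meminfo loop unrolled under `set -x`: the file is slurped with `mapfile`, each entry is split on `IFS=': '` into key and value, and every key is compared literally against the requested field (the backslash-escaped `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` pattern), with `continue` on a miss and `echo`/`return 0` on a hit. A minimal standalone sketch of the same technique (the function body here is illustrative, not the actual SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced in this log: read /proc/meminfo,
# split each line on ': ', and print the value of the requested field.
get_meminfo() {
    local get=$1 mem_f=/proc/meminfo
    local var val _ line
    local -a mem
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Literal key comparison, as in the [[ key == \H\u\g\e... ]] tests above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo MemTotal
```

The escaped pattern in the trace forces `[[ == ]]` to match the key as a literal string rather than a glob, which is why every non-matching field falls through to `continue`.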
00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
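For the per-node queries in this log, the same loop runs against /sys/devices/system/node/node0/meminfo, whose lines carry a `Node 0 ` prefix that the script strips with the extglob expansion `mem=("${mem[@]#Node +([0-9]) }")` before parsing. A hedged sketch of that per-node variant (function name and chosen field are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-node meminfo read traced in this log. Node meminfo
# entries look like "Node 0 HugePages_Total:  1024"; the extglob pattern
# "Node +([0-9]) " removes the prefix so the usual key/value parse applies.
shopt -s extglob

node_hugepages() {
    local node=$1
    local var val _ line
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <N> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

Note that `+([0-9])` in a parameter expansion only acts as a pattern when `extglob` is enabled, which setup/common.sh evidently relies on.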
00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 19031216 kB' 'MemUsed: 13803476 kB' 'SwapCached: 0 kB' 'Active: 7567488 kB' 'Inactive: 3334432 kB' 'Active(anon): 7403216 kB' 'Inactive(anon): 0 kB' 'Active(file): 164272 kB' 'Inactive(file): 3334432 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10655032 kB' 'Mapped: 63760 kB' 'AnonPages: 250072 kB' 'Shmem: 7156328 kB' 'KernelStack: 5512 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110768 kB' 'Slab: 258324 kB' 'SReclaimable: 110768 kB' 'SUnreclaim: 147556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.315 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.316 [... the setup/common.sh@31 read / @32 field test / @32 continue triplet repeats for each /proc/meminfo field: Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free ...] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:04:46.316 node0=1024 expecting 1024 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.316 00:04:46.316 real 0m2.290s 00:04:46.316 user 0m1.017s 00:04:46.316 sys 0m1.331s 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.316 23:48:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.316 ************************************ 00:04:46.316 END TEST no_shrink_alloc 00:04:46.316 ************************************ 00:04:46.316 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:46.316 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.316 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.316 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.316 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.317 23:48:20 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.317 23:48:20 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.317 00:04:46.317 real 0m9.660s 00:04:46.317 user 0m3.935s 00:04:46.317 sys 0m5.093s 00:04:46.317 23:48:20 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.317 23:48:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 ************************************ 00:04:46.317 END TEST hugepages 00:04:46.317 ************************************ 00:04:46.317 23:48:20 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.317 23:48:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.317 23:48:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.317 23:48:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 ************************************ 00:04:46.317 START TEST driver 00:04:46.317 ************************************ 00:04:46.317 23:48:20 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:46.317 * Looking for test storage... 
00:04:46.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:46.317 23:48:20 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:46.317 23:48:20 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:46.317 23:48:20 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.851 23:48:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:48.851 23:48:22 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.851 23:48:22 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.851 23:48:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:48.851 ************************************ 00:04:48.851 START TEST guess_driver 00:04:48.851 ************************************ 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 102 > 0 )) 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:48.851 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:48.851 Looking for driver=vfio-pci 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.851 23:48:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.786 [... the setup/driver.sh@57 read / @58 [[ -> == \-\> ]] / @61 [[ vfio-pci == vfio-pci ]] triplet repeats (timestamps 23:48:24, 00:04:49.786-00:04:49.787) for each device line emitted by setup.sh config ...] 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.767 23:48:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.324 00:04:53.324 real 0m4.286s 00:04:53.324 user 0m0.982s 00:04:53.324 sys 0m1.631s 00:04:53.324 23:48:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.324 23:48:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.324 ************************************ 00:04:53.324 END TEST guess_driver 00:04:53.324 ************************************ 00:04:53.324 00:04:53.324 real 0m6.542s 00:04:53.324 user 0m1.501s 00:04:53.324 sys 0m2.510s 00:04:53.324 23:48:27 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.324 23:48:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.324 ************************************ 00:04:53.324 END TEST driver 00:04:53.324 ************************************ 00:04:53.324 23:48:27 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:53.324 23:48:27 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.324 23:48:27 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.324 23:48:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.324 ************************************ 00:04:53.324 START TEST devices 00:04:53.324 ************************************ 00:04:53.324 23:48:27 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:53.324 * Looking for test storage... 
00:04:53.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:53.324 23:48:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.324 23:48:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:53.324 23:48:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.324 23:48:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:54.261 23:48:28 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:54.261 No valid GPT data, bailing 00:04:54.261 23:48:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:54.261 23:48:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:54.261 23:48:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:54.261 23:48:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.261 23:48:28 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.261 23:48:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:54.261 ************************************ 00:04:54.261 START TEST nvme_mount 00:04:54.261 ************************************ 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:54.261 23:48:28 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.261 23:48:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:55.199 Creating new GPT entries in memory. 00:04:55.199 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.199 other utilities. 00:04:55.199 23:48:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.199 23:48:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.199 23:48:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.199 23:48:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.199 23:48:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:56.577 Creating new GPT entries in memory. 00:04:56.577 The operation has completed successfully. 
00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1151602 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.577 23:48:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:57.512 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:57.513 
23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:57.513 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.513 23:48:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.771 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:57.771 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:57.771 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:57.771 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:57.771 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.772 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.772 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:57.772 23:48:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.772 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.772 23:48:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:58.712 23:48:33 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:04:58.975 23:48:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.975 23:48:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.910 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.910 23:48:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.911 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.911 00:04:59.911 real 0m5.691s 00:04:59.911 user 0m1.325s 00:04:59.911 sys 0m2.070s 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.911 23:48:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:59.911 ************************************ 00:04:59.911 END TEST nvme_mount 00:04:59.911 ************************************ 00:04:59.911 23:48:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:59.911 23:48:34 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:04:59.911 23:48:34 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.911 23:48:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.911 ************************************ 00:04:59.911 START TEST dm_mount 00:04:59.911 ************************************ 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.911 23:48:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:01.291 Creating new GPT entries in memory. 00:05:01.291 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:01.291 other utilities. 00:05:01.291 23:48:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:01.291 23:48:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.291 23:48:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.291 23:48:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.291 23:48:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:02.228 Creating new GPT entries in memory. 00:05:02.228 The operation has completed successfully. 00:05:02.228 23:48:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.228 23:48:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.228 23:48:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.228 23:48:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.228 23:48:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:03.168 The operation has completed successfully. 
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1153371 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.168 23:48:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[
0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.106 23:48:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:05:05.043 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 ==
\0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:05:05.044 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:05.303 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:05.303
00:05:05.303 real 0m5.248s
00:05:05.303 user 0m0.843s
00:05:05.303 sys 0m1.338s
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:05.303 23:48:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:05:05.303 ************************************
00:05:05.303 END TEST dm_mount
00:05:05.303 ************************************
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:05.303 23:48:39 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:05.564 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:05.564 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:05.564 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:05.564 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:05.564 23:48:39 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:05:05.564
00:05:05.564 real 0m12.654s
00:05:05.564 user 0m2.747s
00:05:05.564 sys 0m4.364s
00:05:05.564 23:48:39 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:05.564 23:48:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:05:05.564 ************************************
00:05:05.564 END TEST devices
00:05:05.564 ************************************
00:05:05.564
00:05:05.564 real 0m38.216s
00:05:05.564 user 0m11.134s
00:05:05.564 sys 0m16.731s
00:05:05.564 23:48:39 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:05.564 23:48:39 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:05.564 ************************************
00:05:05.564 END TEST setup.sh
00:05:05.564 ************************************
00:05:05.564 23:48:40 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:06.529 Hugepages
00:05:06.529 node hugesize free / total
00:05:06.529 node0 1048576kB 0 / 0
00:05:06.529 node0 2048kB 2048 / 2048
00:05:06.529 node1 1048576kB 0 / 0
00:05:06.529 node1 2048kB 0 / 0
00:05:06.529
00:05:06.529 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:06.529 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:05:06.529 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:05:06.529 I/OAT
0000:00:04.7 8086 3c27 0 ioatdma - -
00:05:06.529 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:05:06.529 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:05:06.529 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:05:06.529 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:05:06.529 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:05:06.530 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:05:06.530 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:05:06.530 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:05:06.530 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:05:06.530 23:48:41 -- spdk/autotest.sh@130 -- # uname -s
00:05:06.530 23:48:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:05:06.530 23:48:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:05:06.530 23:48:41 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:07.908 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:07.908 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:07.908 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:08.845 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:05:08.845 23:48:43 -- common/autotest_common.sh@1528 -- # sleep 1
00:05:09.790 23:48:44 -- common/autotest_common.sh@1529 -- # bdfs=()
00:05:09.790 23:48:44 -- common/autotest_common.sh@1529 -- # local bdfs
00:05:09.790 23:48:44 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs))
00:05:09.790 23:48:44 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs
00:05:09.790 23:48:44 -- common/autotest_common.sh@1509 -- # bdfs=()
00:05:09.790 23:48:44 -- common/autotest_common.sh@1509 -- # local bdfs
00:05:09.790 23:48:44 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:09.790 23:48:44 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:09.790 23:48:44 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr'
00:05:09.790 23:48:44 -- common/autotest_common.sh@1511 -- # (( 1 == 0 ))
00:05:09.790 23:48:44 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0
00:05:09.790 23:48:44 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:10.728 Waiting for block devices as requested
00:05:10.728 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:05:10.728 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma
00:05:10.728 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma
00:05:10.986 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma
00:05:10.986 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma
00:05:10.986 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma
00:05:10.986 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma
00:05:11.245 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma
00:05:11.245 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma
00:05:11.245 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma
00:05:11.505 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma
00:05:11.505 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma
00:05:11.505 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma
00:05:11.505 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma
00:05:11.765 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma
00:05:11.765 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma
00:05:11.765 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma
00:05:11.765 23:48:46 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}"
00:05:11.765 23:48:46 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1498 -- # grep 0000:84:00.0/nvme/nvme
00:05:11.765 23:48:46 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]]
00:05:11.765 23:48:46 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]]
00:05:11.765 23:48:46 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1541 -- # grep oacs
00:05:11.765 23:48:46 -- common/autotest_common.sh@1541 -- # cut -d: -f2
00:05:11.765 23:48:46 -- common/autotest_common.sh@1541 -- # oacs=' 0xf'
00:05:11.765 23:48:46 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8
00:05:11.765 23:48:46 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]]
00:05:11.765 23:48:46 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0
00:05:11.765 23:48:46 -- common/autotest_common.sh@1550 -- # grep unvmcap
00:05:11.765 23:48:46 -- common/autotest_common.sh@1550 -- # cut -d: -f2
00:05:11.765 23:48:46 -- common/autotest_common.sh@1550 -- # unvmcap=' 0'
00:05:11.765 23:48:46 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]]
00:05:11.765 23:48:46 --
common/autotest_common.sh@1553 -- # continue
00:05:11.765 23:48:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:05:11.765 23:48:46 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:11.765 23:48:46 -- common/autotest_common.sh@10 -- # set +x
00:05:11.765 23:48:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:05:11.765 23:48:46 -- common/autotest_common.sh@720 -- # xtrace_disable
00:05:11.765 23:48:46 -- common/autotest_common.sh@10 -- # set +x
00:05:12.025 23:48:46 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:12.965 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:12.965 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:05:12.965 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:05:13.901 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:05:14.161 23:48:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:05:14.161 23:48:48 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:14.161 23:48:48 -- common/autotest_common.sh@10 -- # set +x
00:05:14.161 23:48:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:05:14.161 23:48:48 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs
00:05:14.161 23:48:48 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54
00:05:14.161 23:48:48 -- common/autotest_common.sh@1573 -- # bdfs=()
00:05:14.161 23:48:48 -- common/autotest_common.sh@1573 -- # local bdfs
00:05:14.161 23:48:48 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs
00:05:14.161 23:48:48 -- common/autotest_common.sh@1509 -- # bdfs=()
00:05:14.161 23:48:48 -- common/autotest_common.sh@1509 -- # local bdfs
00:05:14.161 23:48:48 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:14.161 23:48:48 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:14.161 23:48:48 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr'
00:05:14.161 23:48:48 -- common/autotest_common.sh@1511 -- # (( 1 == 0 ))
00:05:14.161 23:48:48 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0
00:05:14.161 23:48:48 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs)
00:05:14.161 23:48:48 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:84:00.0/device
00:05:14.161 23:48:48 -- common/autotest_common.sh@1576 -- # device=0x0a54
00:05:14.161 23:48:48 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:14.161 23:48:48 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf)
00:05:14.161 23:48:48 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:84:00.0
00:05:14.161 23:48:48 -- common/autotest_common.sh@1588 -- # [[ -z 0000:84:00.0 ]]
00:05:14.161 23:48:48 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1157374
00:05:14.161 23:48:48 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:14.161 23:48:48 -- common/autotest_common.sh@1594 -- # waitforlisten 1157374
00:05:14.161 23:48:48 -- common/autotest_common.sh@827 -- # '[' -z 1157374 ']'
00:05:14.161 23:48:48 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:14.161 23:48:48 -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:14.161 23:48:48 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:14.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:14.161 23:48:48 -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:14.161 23:48:48 -- common/autotest_common.sh@10 -- # set +x
00:05:14.161 [2024-07-15 23:48:48.546617] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:05:14.161 [2024-07-15 23:48:48.546727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157374 ]
00:05:14.161 EAL: No free 2048 kB hugepages reported on node 1
00:05:14.161 [2024-07-15 23:48:48.607205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.419 [2024-07-15 23:48:48.697281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.419 23:48:48 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:14.419 23:48:48 -- common/autotest_common.sh@860 -- # return 0
00:05:14.419 23:48:48 -- common/autotest_common.sh@1596 -- # bdf_id=0
00:05:14.419 23:48:48 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}"
00:05:14.419 23:48:48 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0
00:05:17.706 nvme0n1
00:05:17.706 23:48:52 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:17.966 [2024-07-15 23:48:52.300586] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:05:17.966 [2024-07-15 23:48:52.300649] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:05:17.966 request:
00:05:17.966 {
00:05:17.966 "nvme_ctrlr_name": "nvme0",
00:05:17.966 "password": "test",
00:05:17.966 "method": "bdev_nvme_opal_revert",
00:05:17.966 "req_id": 1
00:05:17.966 }
00:05:17.966 Got JSON-RPC error response
00:05:17.966 response:
00:05:17.966 {
00:05:17.966 "code": -32603,
00:05:17.966 "message": "Internal error"
00:05:17.966 }
00:05:17.966 23:48:52 -- common/autotest_common.sh@1600 -- # true
00:05:17.966 23:48:52 -- common/autotest_common.sh@1601 -- # (( ++bdf_id ))
00:05:17.966 23:48:52 -- common/autotest_common.sh@1604 -- # killprocess 1157374
00:05:17.966 23:48:52 -- common/autotest_common.sh@946 -- # '[' -z 1157374 ']'
00:05:17.966 23:48:52 -- common/autotest_common.sh@950 -- # kill -0 1157374
00:05:17.966 23:48:52 -- common/autotest_common.sh@951 -- # uname
00:05:17.966 23:48:52 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:17.966 23:48:52 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1157374
00:05:17.966 23:48:52 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:17.966 23:48:52 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:17.966 23:48:52 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1157374'
00:05:17.966 killing process with pid 1157374
00:05:17.966 23:48:52 -- common/autotest_common.sh@965 -- # kill 1157374
00:05:17.966 23:48:52 -- common/autotest_common.sh@970 -- # wait 1157374
00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.966 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 
0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 
0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 
0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:17.967 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:19.874 23:48:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:19.874 23:48:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:19.874 23:48:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:19.874 23:48:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:19.874 23:48:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:19.874 23:48:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.874 23:48:53 -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 23:48:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:19.874 23:48:53 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.874 23:48:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:05:19.874 23:48:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.874 23:48:53 -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 ************************************ 00:05:19.874 START TEST env 00:05:19.874 ************************************ 00:05:19.874 23:48:53 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:19.874 * Looking for test storage... 00:05:19.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:19.874 23:48:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.874 23:48:54 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.874 23:48:54 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.874 23:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 ************************************ 00:05:19.874 START TEST env_memory 00:05:19.874 ************************************ 00:05:19.874 23:48:54 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:19.874 00:05:19.874 00:05:19.874 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.874 http://cunit.sourceforge.net/ 00:05:19.874 00:05:19.874 00:05:19.874 Suite: memory 00:05:19.874 Test: alloc and free memory map ...[2024-07-15 23:48:54.107886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:19.874 passed 00:05:19.874 Test: mem map translation ...[2024-07-15 23:48:54.139119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:19.874 [2024-07-15 23:48:54.139149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:19.874 [2024-07-15 23:48:54.139203] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:19.874 [2024-07-15 23:48:54.139217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:19.874 passed 00:05:19.874 Test: mem map registration ...[2024-07-15 23:48:54.205721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.874 [2024-07-15 23:48:54.205749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.874 passed 00:05:19.874 Test: mem map adjacent registrations ...passed 00:05:19.874 00:05:19.874 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.874 suites 1 1 n/a 0 0 00:05:19.874 tests 4 4 4 0 0 00:05:19.874 asserts 152 152 152 0 n/a 00:05:19.874 00:05:19.874 Elapsed time = 0.221 seconds 00:05:19.874 00:05:19.874 real 0m0.231s 00:05:19.874 user 0m0.223s 00:05:19.874 sys 0m0.006s 00:05:19.874 23:48:54 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.874 23:48:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 ************************************ 00:05:19.874 END TEST env_memory 00:05:19.874 ************************************ 00:05:19.874 23:48:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.874 23:48:54 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.874 23:48:54 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:19.874 23:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.874 ************************************ 00:05:19.874 START TEST env_vtophys 00:05:19.874 ************************************ 00:05:19.874 23:48:54 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.875 EAL: lib.eal log level changed from notice to debug 00:05:19.875 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.875 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.875 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.875 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.875 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.875 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.875 EAL: Detected lcore 6 as core 6 on socket 0 00:05:19.875 EAL: Detected lcore 7 as core 7 on socket 0 00:05:19.875 EAL: Detected lcore 8 as core 0 on socket 1 00:05:19.875 EAL: Detected lcore 9 as core 1 on socket 1 00:05:19.875 EAL: Detected lcore 10 as core 2 on socket 1 00:05:19.875 EAL: Detected lcore 11 as core 3 on socket 1 00:05:19.875 EAL: Detected lcore 12 as core 4 on socket 1 00:05:19.875 EAL: Detected lcore 13 as core 5 on socket 1 00:05:19.875 EAL: Detected lcore 14 as core 6 on socket 1 00:05:19.875 EAL: Detected lcore 15 as core 7 on socket 1 00:05:19.875 EAL: Detected lcore 16 as core 0 on socket 0 00:05:19.875 EAL: Detected lcore 17 as core 1 on socket 0 00:05:19.875 EAL: Detected lcore 18 as core 2 on socket 0 00:05:19.875 EAL: Detected lcore 19 as core 3 on socket 0 00:05:19.875 EAL: Detected lcore 20 as core 4 on socket 0 00:05:19.875 EAL: Detected lcore 21 as core 5 on socket 0 00:05:19.875 EAL: Detected lcore 22 as core 6 on socket 0 00:05:19.875 EAL: Detected lcore 23 as core 7 on socket 0 00:05:19.875 EAL: Detected lcore 24 as core 0 on socket 1 00:05:19.875 EAL: Detected lcore 25 as core 1 on socket 1 00:05:19.875 EAL: Detected lcore 26 as core 2 on socket 1 00:05:19.875 EAL: 
Detected lcore 27 as core 3 on socket 1 00:05:19.875 EAL: Detected lcore 28 as core 4 on socket 1 00:05:19.875 EAL: Detected lcore 29 as core 5 on socket 1 00:05:19.875 EAL: Detected lcore 30 as core 6 on socket 1 00:05:19.875 EAL: Detected lcore 31 as core 7 on socket 1 00:05:19.875 EAL: Maximum logical cores by configuration: 128 00:05:19.875 EAL: Detected CPU lcores: 32 00:05:19.875 EAL: Detected NUMA nodes: 2 00:05:19.875 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:19.875 EAL: Detected shared linkage of DPDK 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:19.875 EAL: Registered [vdev] bus. 00:05:19.875 EAL: bus.vdev log level changed from disabled to notice 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:19.875 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:19.875 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:19.875 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:19.875 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.875 EAL: No shared files mode enabled, IPC is disabled 00:05:19.875 EAL: 
Bus pci wants IOVA as 'DC' 00:05:19.875 EAL: Bus vdev wants IOVA as 'DC' 00:05:19.875 EAL: Buses did not request a specific IOVA mode. 00:05:19.875 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.875 EAL: Selected IOVA mode 'VA' 00:05:19.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.875 EAL: Probing VFIO support... 00:05:19.875 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.875 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.875 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.875 EAL: VFIO support initialized 00:05:19.875 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.875 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.875 EAL: Setting up physically contiguous memory... 00:05:19.875 EAL: Setting maximum number of open files to 524288 00:05:19.875 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.875 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.875 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.875 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.875 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.875 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.875 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.875 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.875 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 
00:05:19.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.875 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.875 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.875 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.875 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.875 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.875 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.875 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.875 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.875 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.875 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.875 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.875 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.134 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.134 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:20.134 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:20.134 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.134 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:20.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.134 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.134 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:20.134 EAL: VA reserved for memseg list at 
0x201800e00000, size 400000000 00:05:20.134 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.134 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:20.134 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.134 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.134 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:20.134 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:20.134 EAL: Hugepages will be freed exactly as allocated. 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: TSC frequency is ~2700000 KHz 00:05:20.134 EAL: Main lcore 0 is ready (tid=7f5477b1ea00;cpuset=[0]) 00:05:20.134 EAL: Trying to obtain current memory policy. 00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 0 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.134 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.134 00:05:20.134 00:05:20.134 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.134 http://cunit.sourceforge.net/ 00:05:20.134 00:05:20.134 00:05:20.134 Suite: components_suite 00:05:20.134 Test: vtophys_malloc_test ...passed 00:05:20.134 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.134 EAL: Trying to obtain current memory policy. 00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.134 EAL: Trying to obtain current memory policy. 00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.134 EAL: Trying to obtain current memory policy. 
00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.134 EAL: Trying to obtain current memory policy. 00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.134 EAL: Trying to obtain current memory policy. 00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.134 EAL: Trying to obtain current memory policy. 
00:05:20.134 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.134 EAL: Restoring previous memory policy: 4 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.134 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.134 EAL: request: mp_malloc_sync 00:05:20.134 EAL: No shared files mode enabled, IPC is disabled 00:05:20.134 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.135 EAL: Trying to obtain current memory policy. 00:05:20.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.135 EAL: Restoring previous memory policy: 4 00:05:20.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.135 EAL: request: mp_malloc_sync 00:05:20.135 EAL: No shared files mode enabled, IPC is disabled 00:05:20.135 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.135 EAL: request: mp_malloc_sync 00:05:20.135 EAL: No shared files mode enabled, IPC is disabled 00:05:20.135 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.135 EAL: Trying to obtain current memory policy. 00:05:20.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.393 EAL: Restoring previous memory policy: 4 00:05:20.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.393 EAL: request: mp_malloc_sync 00:05:20.393 EAL: No shared files mode enabled, IPC is disabled 00:05:20.393 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.393 EAL: request: mp_malloc_sync 00:05:20.393 EAL: No shared files mode enabled, IPC is disabled 00:05:20.393 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.393 EAL: Trying to obtain current memory policy. 
00:05:20.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.652 EAL: Restoring previous memory policy: 4 00:05:20.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.652 EAL: request: mp_malloc_sync 00:05:20.652 EAL: No shared files mode enabled, IPC is disabled 00:05:20.652 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.911 EAL: request: mp_malloc_sync 00:05:20.911 EAL: No shared files mode enabled, IPC is disabled 00:05:20.911 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.911 passed 00:05:20.911 00:05:20.911 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.911 suites 1 1 n/a 0 0 00:05:20.911 tests 2 2 2 0 0 00:05:20.911 asserts 497 497 497 0 n/a 00:05:20.911 00:05:20.911 Elapsed time = 0.938 seconds 00:05:20.911 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.911 EAL: request: mp_malloc_sync 00:05:20.911 EAL: No shared files mode enabled, IPC is disabled 00:05:20.911 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.911 EAL: No shared files mode enabled, IPC is disabled 00:05:20.911 EAL: No shared files mode enabled, IPC is disabled 00:05:20.911 EAL: No shared files mode enabled, IPC is disabled 00:05:20.911 00:05:20.911 real 0m1.041s 00:05:20.911 user 0m0.503s 00:05:20.911 sys 0m0.511s 00:05:20.911 23:48:55 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.911 23:48:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:20.911 ************************************ 00:05:20.911 END TEST env_vtophys 00:05:20.911 ************************************ 00:05:20.911 23:48:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.911 23:48:55 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.911 23:48:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.911 23:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.170 
************************************ 00:05:21.170 START TEST env_pci 00:05:21.170 ************************************ 00:05:21.170 23:48:55 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.170 00:05:21.170 00:05:21.170 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.170 http://cunit.sourceforge.net/ 00:05:21.170 00:05:21.170 00:05:21.170 Suite: pci 00:05:21.170 Test: pci_hook ...[2024-07-15 23:48:55.454136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1158064 has claimed it 00:05:21.170 EAL: Cannot find device (10000:00:01.0) 00:05:21.170 EAL: Failed to attach device on primary process 00:05:21.170 passed 00:05:21.170 00:05:21.170 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.170 suites 1 1 n/a 0 0 00:05:21.170 tests 1 1 1 0 0 00:05:21.170 asserts 25 25 25 0 n/a 00:05:21.170 00:05:21.170 Elapsed time = 0.017 seconds 00:05:21.170 00:05:21.170 real 0m0.030s 00:05:21.170 user 0m0.009s 00:05:21.170 sys 0m0.021s 00:05:21.170 23:48:55 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.171 23:48:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:21.171 ************************************ 00:05:21.171 END TEST env_pci 00:05:21.171 ************************************ 00:05:21.171 23:48:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.171 23:48:55 env -- env/env.sh@15 -- # uname 00:05:21.171 23:48:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.171 23:48:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.171 23:48:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.171 23:48:55 env -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:21.171 23:48:55 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.171 23:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.171 ************************************ 00:05:21.171 START TEST env_dpdk_post_init 00:05:21.171 ************************************ 00:05:21.171 23:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.171 EAL: Detected CPU lcores: 32 00:05:21.171 EAL: Detected NUMA nodes: 2 00:05:21.171 EAL: Detected shared linkage of DPDK 00:05:21.171 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.171 EAL: Selected IOVA mode 'VA' 00:05:21.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.171 EAL: VFIO support initialized 00:05:21.171 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.171 EAL: Using IOMMU type 1 (Type 1) 00:05:21.171 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:05:21.171 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:05:21.171 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:05:21.171 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:05:21.171 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 
0000:80:04.2 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:05:21.431 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:05:22.366 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:05:25.699 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:05:25.700 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:05:25.700 Starting DPDK initialization... 00:05:25.700 Starting SPDK post initialization... 00:05:25.700 SPDK NVMe probe 00:05:25.700 Attaching to 0000:84:00.0 00:05:25.700 Attached to 0000:84:00.0 00:05:25.700 Cleaning up... 00:05:25.700 00:05:25.700 real 0m4.356s 00:05:25.700 user 0m3.240s 00:05:25.700 sys 0m0.181s 00:05:25.700 23:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.700 23:48:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 ************************************ 00:05:25.700 END TEST env_dpdk_post_init 00:05:25.700 ************************************ 00:05:25.700 23:48:59 env -- env/env.sh@26 -- # uname 00:05:25.700 23:48:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.700 23:48:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.700 23:48:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.700 23:48:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.700 23:48:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 ************************************ 00:05:25.700 START TEST env_mem_callbacks 00:05:25.700 
************************************ 00:05:25.700 23:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.700 EAL: Detected CPU lcores: 32 00:05:25.700 EAL: Detected NUMA nodes: 2 00:05:25.700 EAL: Detected shared linkage of DPDK 00:05:25.700 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.700 EAL: Selected IOVA mode 'VA' 00:05:25.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.700 EAL: VFIO support initialized 00:05:25.700 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.700 00:05:25.700 00:05:25.700 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.700 http://cunit.sourceforge.net/ 00:05:25.700 00:05:25.700 00:05:25.700 Suite: memory 00:05:25.700 Test: test ... 00:05:25.700 register 0x200000200000 2097152 00:05:25.700 malloc 3145728 00:05:25.700 register 0x200000400000 4194304 00:05:25.700 buf 0x200000500000 len 3145728 PASSED 00:05:25.700 malloc 64 00:05:25.700 buf 0x2000004fff40 len 64 PASSED 00:05:25.700 malloc 4194304 00:05:25.700 register 0x200000800000 6291456 00:05:25.700 buf 0x200000a00000 len 4194304 PASSED 00:05:25.700 free 0x200000500000 3145728 00:05:25.700 free 0x2000004fff40 64 00:05:25.700 unregister 0x200000400000 4194304 PASSED 00:05:25.700 free 0x200000a00000 4194304 00:05:25.700 unregister 0x200000800000 6291456 PASSED 00:05:25.700 malloc 8388608 00:05:25.700 register 0x200000400000 10485760 00:05:25.700 buf 0x200000600000 len 8388608 PASSED 00:05:25.700 free 0x200000600000 8388608 00:05:25.700 unregister 0x200000400000 10485760 PASSED 00:05:25.700 passed 00:05:25.700 00:05:25.700 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.700 suites 1 1 n/a 0 0 00:05:25.700 tests 1 1 1 0 0 00:05:25.700 asserts 15 15 15 0 n/a 00:05:25.700 00:05:25.700 Elapsed time = 0.006 seconds 00:05:25.700 00:05:25.700 real 0m0.047s 00:05:25.700 user 0m0.011s 00:05:25.700 sys 0m0.035s 
00:05:25.700 23:48:59 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.700 23:48:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 ************************************ 00:05:25.700 END TEST env_mem_callbacks 00:05:25.700 ************************************ 00:05:25.700 00:05:25.700 real 0m6.026s 00:05:25.700 user 0m4.123s 00:05:25.700 sys 0m0.958s 00:05:25.700 23:49:00 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.700 23:49:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 ************************************ 00:05:25.700 END TEST env 00:05:25.700 ************************************ 00:05:25.700 23:49:00 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.700 23:49:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.700 23:49:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.700 23:49:00 -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 ************************************ 00:05:25.700 START TEST rpc 00:05:25.700 ************************************ 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.700 * Looking for test storage... 
00:05:25.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.700 23:49:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1158594 00:05:25.700 23:49:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.700 23:49:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.700 23:49:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1158594 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@827 -- # '[' -z 1158594 ']' 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.700 23:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.700 [2024-07-15 23:49:00.165909] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:25.700 [2024-07-15 23:49:00.165999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158594 ] 00:05:25.959 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.959 [2024-07-15 23:49:00.226609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.959 [2024-07-15 23:49:00.313974] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.959 [2024-07-15 23:49:00.314029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1158594' to capture a snapshot of events at runtime. 
00:05:25.959 [2024-07-15 23:49:00.314044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:25.959 [2024-07-15 23:49:00.314058] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:25.959 [2024-07-15 23:49:00.314069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1158594 for offline analysis/debug. 00:05:25.959 [2024-07-15 23:49:00.314105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.228 23:49:00 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.228 23:49:00 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.228 23:49:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.228 23:49:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.228 23:49:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.228 23:49:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.228 23:49:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.228 23:49:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.228 23:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 ************************************ 00:05:26.228 START TEST rpc_integrity 00:05:26.228 ************************************ 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1121 
-- # rpc_integrity 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.228 { 00:05:26.228 "name": "Malloc0", 00:05:26.228 "aliases": [ 00:05:26.228 "6837304d-7263-4d69-b007-cb5cb7a0f718" 00:05:26.228 ], 00:05:26.228 "product_name": "Malloc disk", 00:05:26.228 "block_size": 512, 00:05:26.228 "num_blocks": 16384, 00:05:26.228 "uuid": "6837304d-7263-4d69-b007-cb5cb7a0f718", 00:05:26.228 "assigned_rate_limits": { 00:05:26.228 "rw_ios_per_sec": 0, 00:05:26.228 "rw_mbytes_per_sec": 0, 00:05:26.228 "r_mbytes_per_sec": 0, 00:05:26.228 "w_mbytes_per_sec": 0 00:05:26.228 }, 00:05:26.228 "claimed": false, 
00:05:26.228 "zoned": false, 00:05:26.228 "supported_io_types": { 00:05:26.228 "read": true, 00:05:26.228 "write": true, 00:05:26.228 "unmap": true, 00:05:26.228 "write_zeroes": true, 00:05:26.228 "flush": true, 00:05:26.228 "reset": true, 00:05:26.228 "compare": false, 00:05:26.228 "compare_and_write": false, 00:05:26.228 "abort": true, 00:05:26.228 "nvme_admin": false, 00:05:26.228 "nvme_io": false 00:05:26.228 }, 00:05:26.228 "memory_domains": [ 00:05:26.228 { 00:05:26.228 "dma_device_id": "system", 00:05:26.228 "dma_device_type": 1 00:05:26.228 }, 00:05:26.228 { 00:05:26.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.228 "dma_device_type": 2 00:05:26.228 } 00:05:26.228 ], 00:05:26.228 "driver_specific": {} 00:05:26.228 } 00:05:26.228 ]' 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 [2024-07-15 23:49:00.679750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.228 [2024-07-15 23:49:00.679796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.228 [2024-07-15 23:49:00.679819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf285f0 00:05:26.228 [2024-07-15 23:49:00.679833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.228 [2024-07-15 23:49:00.681370] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.228 [2024-07-15 23:49:00.681396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.228 Passthru0 00:05:26.228 23:49:00 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.228 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.228 { 00:05:26.228 "name": "Malloc0", 00:05:26.228 "aliases": [ 00:05:26.228 "6837304d-7263-4d69-b007-cb5cb7a0f718" 00:05:26.228 ], 00:05:26.228 "product_name": "Malloc disk", 00:05:26.228 "block_size": 512, 00:05:26.228 "num_blocks": 16384, 00:05:26.228 "uuid": "6837304d-7263-4d69-b007-cb5cb7a0f718", 00:05:26.228 "assigned_rate_limits": { 00:05:26.228 "rw_ios_per_sec": 0, 00:05:26.228 "rw_mbytes_per_sec": 0, 00:05:26.228 "r_mbytes_per_sec": 0, 00:05:26.228 "w_mbytes_per_sec": 0 00:05:26.228 }, 00:05:26.228 "claimed": true, 00:05:26.228 "claim_type": "exclusive_write", 00:05:26.228 "zoned": false, 00:05:26.228 "supported_io_types": { 00:05:26.228 "read": true, 00:05:26.228 "write": true, 00:05:26.228 "unmap": true, 00:05:26.228 "write_zeroes": true, 00:05:26.228 "flush": true, 00:05:26.228 "reset": true, 00:05:26.228 "compare": false, 00:05:26.228 "compare_and_write": false, 00:05:26.228 "abort": true, 00:05:26.228 "nvme_admin": false, 00:05:26.228 "nvme_io": false 00:05:26.228 }, 00:05:26.228 "memory_domains": [ 00:05:26.228 { 00:05:26.228 "dma_device_id": "system", 00:05:26.228 "dma_device_type": 1 00:05:26.228 }, 00:05:26.228 { 00:05:26.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.228 "dma_device_type": 2 00:05:26.228 } 00:05:26.228 ], 00:05:26.228 "driver_specific": {} 00:05:26.228 }, 00:05:26.228 { 00:05:26.228 "name": "Passthru0", 00:05:26.228 "aliases": [ 00:05:26.228 "1f7979ff-cb86-50cc-86f0-87da4f1dc337" 00:05:26.228 ], 00:05:26.228 "product_name": "passthru", 00:05:26.228 
"block_size": 512, 00:05:26.228 "num_blocks": 16384, 00:05:26.228 "uuid": "1f7979ff-cb86-50cc-86f0-87da4f1dc337", 00:05:26.228 "assigned_rate_limits": { 00:05:26.228 "rw_ios_per_sec": 0, 00:05:26.228 "rw_mbytes_per_sec": 0, 00:05:26.228 "r_mbytes_per_sec": 0, 00:05:26.228 "w_mbytes_per_sec": 0 00:05:26.228 }, 00:05:26.228 "claimed": false, 00:05:26.228 "zoned": false, 00:05:26.228 "supported_io_types": { 00:05:26.228 "read": true, 00:05:26.228 "write": true, 00:05:26.228 "unmap": true, 00:05:26.228 "write_zeroes": true, 00:05:26.228 "flush": true, 00:05:26.228 "reset": true, 00:05:26.228 "compare": false, 00:05:26.228 "compare_and_write": false, 00:05:26.228 "abort": true, 00:05:26.228 "nvme_admin": false, 00:05:26.228 "nvme_io": false 00:05:26.228 }, 00:05:26.228 "memory_domains": [ 00:05:26.228 { 00:05:26.228 "dma_device_id": "system", 00:05:26.228 "dma_device_type": 1 00:05:26.228 }, 00:05:26.228 { 00:05:26.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.228 "dma_device_type": 2 00:05:26.228 } 00:05:26.228 ], 00:05:26.228 "driver_specific": { 00:05:26.228 "passthru": { 00:05:26.228 "name": "Passthru0", 00:05:26.228 "base_bdev_name": "Malloc0" 00:05:26.228 } 00:05:26.228 } 00:05:26.228 } 00:05:26.228 ]' 00:05:26.228 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.490 23:49:00 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:26.490 23:49:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.490 00:05:26.490 real 0m0.261s 00:05:26.490 user 0m0.168s 00:05:26.490 sys 0m0.022s 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.490 23:49:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.490 ************************************ 00:05:26.490 END TEST rpc_integrity 00:05:26.490 ************************************ 00:05:26.490 23:49:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.490 23:49:00 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.490 23:49:00 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.490 23:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.490 ************************************ 00:05:26.490 START TEST rpc_plugins 00:05:26.490 ************************************ 00:05:26.490 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:26.490 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.491 { 00:05:26.491 "name": "Malloc1", 00:05:26.491 "aliases": [ 00:05:26.491 "6ad90519-82f3-4b5d-a76f-85ec81712ab6" 00:05:26.491 ], 00:05:26.491 "product_name": "Malloc disk", 00:05:26.491 "block_size": 4096, 00:05:26.491 "num_blocks": 256, 00:05:26.491 "uuid": "6ad90519-82f3-4b5d-a76f-85ec81712ab6", 00:05:26.491 "assigned_rate_limits": { 00:05:26.491 "rw_ios_per_sec": 0, 00:05:26.491 "rw_mbytes_per_sec": 0, 00:05:26.491 "r_mbytes_per_sec": 0, 00:05:26.491 "w_mbytes_per_sec": 0 00:05:26.491 }, 00:05:26.491 "claimed": false, 00:05:26.491 "zoned": false, 00:05:26.491 "supported_io_types": { 00:05:26.491 "read": true, 00:05:26.491 "write": true, 00:05:26.491 "unmap": true, 00:05:26.491 "write_zeroes": true, 00:05:26.491 "flush": true, 00:05:26.491 "reset": true, 00:05:26.491 "compare": false, 00:05:26.491 "compare_and_write": false, 00:05:26.491 "abort": true, 00:05:26.491 "nvme_admin": false, 00:05:26.491 "nvme_io": false 00:05:26.491 }, 00:05:26.491 "memory_domains": [ 00:05:26.491 { 00:05:26.491 "dma_device_id": "system", 00:05:26.491 "dma_device_type": 1 00:05:26.491 }, 00:05:26.491 { 00:05:26.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.491 "dma_device_type": 2 00:05:26.491 } 00:05:26.491 ], 00:05:26.491 "driver_specific": {} 00:05:26.491 } 00:05:26.491 ]' 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.491 23:49:00 rpc.rpc_plugins -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:26.491 23:49:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.491 00:05:26.491 real 0m0.129s 00:05:26.491 user 0m0.083s 00:05:26.491 sys 0m0.010s 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.491 23:49:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:26.491 ************************************ 00:05:26.491 END TEST rpc_plugins 00:05:26.491 ************************************ 00:05:26.748 23:49:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.748 23:49:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.748 23:49:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.748 23:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.748 ************************************ 00:05:26.748 START TEST rpc_trace_cmd_test 00:05:26.748 ************************************ 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.748 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:26.748 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1158594", 00:05:26.748 "tpoint_group_mask": "0x8", 00:05:26.748 "iscsi_conn": { 00:05:26.748 "mask": "0x2", 00:05:26.748 "tpoint_mask": "0x0" 00:05:26.748 }, 00:05:26.748 "scsi": { 00:05:26.748 "mask": "0x4", 00:05:26.748 "tpoint_mask": "0x0" 00:05:26.748 }, 00:05:26.748 "bdev": { 00:05:26.748 "mask": "0x8", 00:05:26.748 "tpoint_mask": "0xffffffffffffffff" 00:05:26.748 }, 00:05:26.748 "nvmf_rdma": { 00:05:26.748 "mask": "0x10", 00:05:26.748 "tpoint_mask": "0x0" 00:05:26.748 }, 00:05:26.748 "nvmf_tcp": { 00:05:26.748 "mask": "0x20", 00:05:26.748 "tpoint_mask": "0x0" 00:05:26.748 }, 00:05:26.748 "ftl": { 00:05:26.748 "mask": "0x40", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "blobfs": { 00:05:26.749 "mask": "0x80", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "dsa": { 00:05:26.749 "mask": "0x200", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "thread": { 00:05:26.749 "mask": "0x400", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "nvme_pcie": { 00:05:26.749 "mask": "0x800", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "iaa": { 00:05:26.749 "mask": "0x1000", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "nvme_tcp": { 00:05:26.749 "mask": "0x2000", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "bdev_nvme": { 00:05:26.749 "mask": "0x4000", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 }, 00:05:26.749 "sock": { 00:05:26.749 "mask": "0x8000", 00:05:26.749 "tpoint_mask": "0x0" 00:05:26.749 } 
00:05:26.749 }' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.749 00:05:26.749 real 0m0.216s 00:05:26.749 user 0m0.183s 00:05:26.749 sys 0m0.022s 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.749 23:49:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:26.749 ************************************ 00:05:26.749 END TEST rpc_trace_cmd_test 00:05:26.749 ************************************ 00:05:27.008 23:49:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.008 23:49:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.008 23:49:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.008 23:49:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.008 23:49:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.008 23:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 ************************************ 00:05:27.008 START TEST rpc_daemon_integrity 00:05:27.008 ************************************ 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1121 -- # rpc_integrity 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.008 { 00:05:27.008 "name": "Malloc2", 00:05:27.008 "aliases": [ 00:05:27.008 "28522a08-2dff-4958-8efb-d9189fb4a935" 00:05:27.008 ], 00:05:27.008 "product_name": "Malloc disk", 00:05:27.008 "block_size": 512, 00:05:27.008 "num_blocks": 16384, 00:05:27.008 "uuid": "28522a08-2dff-4958-8efb-d9189fb4a935", 00:05:27.008 "assigned_rate_limits": { 00:05:27.008 "rw_ios_per_sec": 0, 
00:05:27.008 "rw_mbytes_per_sec": 0, 00:05:27.008 "r_mbytes_per_sec": 0, 00:05:27.008 "w_mbytes_per_sec": 0 00:05:27.008 }, 00:05:27.008 "claimed": false, 00:05:27.008 "zoned": false, 00:05:27.008 "supported_io_types": { 00:05:27.008 "read": true, 00:05:27.008 "write": true, 00:05:27.008 "unmap": true, 00:05:27.008 "write_zeroes": true, 00:05:27.008 "flush": true, 00:05:27.008 "reset": true, 00:05:27.008 "compare": false, 00:05:27.008 "compare_and_write": false, 00:05:27.008 "abort": true, 00:05:27.008 "nvme_admin": false, 00:05:27.008 "nvme_io": false 00:05:27.008 }, 00:05:27.008 "memory_domains": [ 00:05:27.008 { 00:05:27.008 "dma_device_id": "system", 00:05:27.008 "dma_device_type": 1 00:05:27.008 }, 00:05:27.008 { 00:05:27.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.008 "dma_device_type": 2 00:05:27.008 } 00:05:27.008 ], 00:05:27.008 "driver_specific": {} 00:05:27.008 } 00:05:27.008 ]' 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 [2024-07-15 23:49:01.425899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.008 [2024-07-15 23:49:01.425942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.008 [2024-07-15 23:49:01.425970] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd762f0 00:05:27.008 [2024-07-15 23:49:01.425985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.008 [2024-07-15 23:49:01.427498] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.008 
[2024-07-15 23:49:01.427524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.008 Passthru0 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.008 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.008 { 00:05:27.008 "name": "Malloc2", 00:05:27.008 "aliases": [ 00:05:27.008 "28522a08-2dff-4958-8efb-d9189fb4a935" 00:05:27.008 ], 00:05:27.008 "product_name": "Malloc disk", 00:05:27.008 "block_size": 512, 00:05:27.008 "num_blocks": 16384, 00:05:27.008 "uuid": "28522a08-2dff-4958-8efb-d9189fb4a935", 00:05:27.008 "assigned_rate_limits": { 00:05:27.008 "rw_ios_per_sec": 0, 00:05:27.008 "rw_mbytes_per_sec": 0, 00:05:27.008 "r_mbytes_per_sec": 0, 00:05:27.008 "w_mbytes_per_sec": 0 00:05:27.008 }, 00:05:27.008 "claimed": true, 00:05:27.008 "claim_type": "exclusive_write", 00:05:27.008 "zoned": false, 00:05:27.008 "supported_io_types": { 00:05:27.008 "read": true, 00:05:27.008 "write": true, 00:05:27.008 "unmap": true, 00:05:27.008 "write_zeroes": true, 00:05:27.008 "flush": true, 00:05:27.008 "reset": true, 00:05:27.008 "compare": false, 00:05:27.008 "compare_and_write": false, 00:05:27.008 "abort": true, 00:05:27.008 "nvme_admin": false, 00:05:27.008 "nvme_io": false 00:05:27.008 }, 00:05:27.008 "memory_domains": [ 00:05:27.008 { 00:05:27.008 "dma_device_id": "system", 00:05:27.008 "dma_device_type": 1 00:05:27.008 }, 00:05:27.008 { 00:05:27.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.008 "dma_device_type": 2 00:05:27.008 } 00:05:27.008 ], 00:05:27.008 
"driver_specific": {} 00:05:27.008 }, 00:05:27.008 { 00:05:27.008 "name": "Passthru0", 00:05:27.008 "aliases": [ 00:05:27.008 "5da1a728-df3a-5d40-9306-34e0a38c1084" 00:05:27.008 ], 00:05:27.008 "product_name": "passthru", 00:05:27.008 "block_size": 512, 00:05:27.008 "num_blocks": 16384, 00:05:27.008 "uuid": "5da1a728-df3a-5d40-9306-34e0a38c1084", 00:05:27.008 "assigned_rate_limits": { 00:05:27.008 "rw_ios_per_sec": 0, 00:05:27.008 "rw_mbytes_per_sec": 0, 00:05:27.008 "r_mbytes_per_sec": 0, 00:05:27.008 "w_mbytes_per_sec": 0 00:05:27.008 }, 00:05:27.008 "claimed": false, 00:05:27.008 "zoned": false, 00:05:27.008 "supported_io_types": { 00:05:27.008 "read": true, 00:05:27.008 "write": true, 00:05:27.009 "unmap": true, 00:05:27.009 "write_zeroes": true, 00:05:27.009 "flush": true, 00:05:27.009 "reset": true, 00:05:27.009 "compare": false, 00:05:27.009 "compare_and_write": false, 00:05:27.009 "abort": true, 00:05:27.009 "nvme_admin": false, 00:05:27.009 "nvme_io": false 00:05:27.009 }, 00:05:27.009 "memory_domains": [ 00:05:27.009 { 00:05:27.009 "dma_device_id": "system", 00:05:27.009 "dma_device_type": 1 00:05:27.009 }, 00:05:27.009 { 00:05:27.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.009 "dma_device_type": 2 00:05:27.009 } 00:05:27.009 ], 00:05:27.009 "driver_specific": { 00:05:27.009 "passthru": { 00:05:27.009 "name": "Passthru0", 00:05:27.009 "base_bdev_name": "Malloc2" 00:05:27.009 } 00:05:27.009 } 00:05:27.009 } 00:05:27.009 ]' 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.009 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.268 23:49:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.268 00:05:27.268 real 0m0.245s 00:05:27.268 user 0m0.160s 00:05:27.268 sys 0m0.026s 00:05:27.268 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.268 23:49:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.268 ************************************ 00:05:27.268 END TEST rpc_daemon_integrity 00:05:27.268 ************************************ 00:05:27.268 23:49:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.268 23:49:01 rpc -- rpc/rpc.sh@84 -- # killprocess 1158594 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@946 -- # '[' -z 1158594 ']' 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@950 -- # kill -0 1158594 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@951 -- # uname 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.268 23:49:01 rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1158594 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1158594' 00:05:27.268 killing process with pid 1158594 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@965 -- # kill 1158594 00:05:27.268 23:49:01 rpc -- common/autotest_common.sh@970 -- # wait 1158594 00:05:27.526 00:05:27.526 real 0m1.814s 00:05:27.526 user 0m2.432s 00:05:27.526 sys 0m0.555s 00:05:27.526 23:49:01 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.526 23:49:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 ************************************ 00:05:27.526 END TEST rpc 00:05:27.526 ************************************ 00:05:27.526 23:49:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.526 23:49:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.526 23:49:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.526 23:49:01 -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 ************************************ 00:05:27.526 START TEST skip_rpc 00:05:27.526 ************************************ 00:05:27.527 23:49:01 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:27.527 * Looking for test storage... 
00:05:27.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.527 23:49:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.527 23:49:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:27.527 23:49:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:27.527 23:49:01 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.527 23:49:01 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.527 23:49:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.527 ************************************ 00:05:27.527 START TEST skip_rpc 00:05:27.527 ************************************ 00:05:27.527 23:49:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:27.527 23:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1158966 00:05:27.527 23:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:27.527 23:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.527 23:49:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:27.786 [2024-07-15 23:49:02.056592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:27.786 [2024-07-15 23:49:02.056676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158966 ] 00:05:27.786 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.786 [2024-07-15 23:49:02.115426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.786 [2024-07-15 23:49:02.202615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.048 
23:49:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1158966 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1158966 ']' 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1158966 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1158966 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1158966' 00:05:33.048 killing process with pid 1158966 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1158966 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1158966 00:05:33.048 00:05:33.048 real 0m5.304s 00:05:33.048 user 0m5.030s 00:05:33.048 sys 0m0.272s 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.048 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.048 ************************************ 00:05:33.048 END TEST skip_rpc 00:05:33.048 ************************************ 00:05:33.048 23:49:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:33.048 23:49:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:33.048 23:49:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.048 23:49:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.048 
************************************ 00:05:33.048 START TEST skip_rpc_with_json 00:05:33.048 ************************************ 00:05:33.048 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:33.048 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:33.048 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1159491 00:05:33.048 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.048 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1159491 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1159491 ']' 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:33.049 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.049 [2024-07-15 23:49:07.420089] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:33.049 [2024-07-15 23:49:07.420206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159491 ] 00:05:33.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.049 [2024-07-15 23:49:07.479887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.307 [2024-07-15 23:49:07.569445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.307 [2024-07-15 23:49:07.795738] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:33.307 request: 00:05:33.307 { 00:05:33.307 "trtype": "tcp", 00:05:33.307 "method": "nvmf_get_transports", 00:05:33.307 "req_id": 1 00:05:33.307 } 00:05:33.307 Got JSON-RPC error response 00:05:33.307 response: 00:05:33.307 { 00:05:33.307 "code": -19, 00:05:33.307 "message": "No such device" 00:05:33.307 } 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.307 [2024-07-15 23:49:07.803859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.307 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:33.566 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.566 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.566 { 00:05:33.566 "subsystems": [ 00:05:33.566 { 00:05:33.566 "subsystem": "vfio_user_target", 00:05:33.566 "config": null 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "keyring", 00:05:33.566 "config": [] 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "iobuf", 00:05:33.566 "config": [ 00:05:33.566 { 00:05:33.566 "method": "iobuf_set_options", 00:05:33.566 "params": { 00:05:33.566 "small_pool_count": 8192, 00:05:33.566 "large_pool_count": 1024, 00:05:33.566 "small_bufsize": 8192, 00:05:33.566 "large_bufsize": 135168 00:05:33.566 } 00:05:33.566 } 00:05:33.566 ] 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "sock", 00:05:33.566 "config": [ 00:05:33.566 { 00:05:33.566 "method": "sock_set_default_impl", 00:05:33.566 "params": { 00:05:33.566 "impl_name": "posix" 00:05:33.566 } 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "method": "sock_impl_set_options", 00:05:33.566 "params": { 00:05:33.566 "impl_name": "ssl", 00:05:33.566 "recv_buf_size": 4096, 00:05:33.566 "send_buf_size": 4096, 00:05:33.566 "enable_recv_pipe": true, 00:05:33.566 "enable_quickack": false, 00:05:33.566 "enable_placement_id": 0, 00:05:33.566 "enable_zerocopy_send_server": true, 00:05:33.566 "enable_zerocopy_send_client": false, 00:05:33.566 "zerocopy_threshold": 0, 00:05:33.566 "tls_version": 0, 
00:05:33.566 "enable_ktls": false 00:05:33.566 } 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "method": "sock_impl_set_options", 00:05:33.566 "params": { 00:05:33.566 "impl_name": "posix", 00:05:33.566 "recv_buf_size": 2097152, 00:05:33.566 "send_buf_size": 2097152, 00:05:33.566 "enable_recv_pipe": true, 00:05:33.566 "enable_quickack": false, 00:05:33.566 "enable_placement_id": 0, 00:05:33.566 "enable_zerocopy_send_server": true, 00:05:33.566 "enable_zerocopy_send_client": false, 00:05:33.566 "zerocopy_threshold": 0, 00:05:33.566 "tls_version": 0, 00:05:33.566 "enable_ktls": false 00:05:33.566 } 00:05:33.566 } 00:05:33.566 ] 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "vmd", 00:05:33.566 "config": [] 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "accel", 00:05:33.566 "config": [ 00:05:33.566 { 00:05:33.566 "method": "accel_set_options", 00:05:33.566 "params": { 00:05:33.566 "small_cache_size": 128, 00:05:33.566 "large_cache_size": 16, 00:05:33.566 "task_count": 2048, 00:05:33.566 "sequence_count": 2048, 00:05:33.566 "buf_count": 2048 00:05:33.566 } 00:05:33.566 } 00:05:33.566 ] 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "subsystem": "bdev", 00:05:33.566 "config": [ 00:05:33.566 { 00:05:33.566 "method": "bdev_set_options", 00:05:33.566 "params": { 00:05:33.566 "bdev_io_pool_size": 65535, 00:05:33.566 "bdev_io_cache_size": 256, 00:05:33.566 "bdev_auto_examine": true, 00:05:33.566 "iobuf_small_cache_size": 128, 00:05:33.566 "iobuf_large_cache_size": 16 00:05:33.566 } 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "method": "bdev_raid_set_options", 00:05:33.566 "params": { 00:05:33.566 "process_window_size_kb": 1024 00:05:33.566 } 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "method": "bdev_iscsi_set_options", 00:05:33.566 "params": { 00:05:33.566 "timeout_sec": 30 00:05:33.566 } 00:05:33.566 }, 00:05:33.566 { 00:05:33.566 "method": "bdev_nvme_set_options", 00:05:33.566 "params": { 00:05:33.566 "action_on_timeout": "none", 00:05:33.566 "timeout_us": 
0, 00:05:33.566 "timeout_admin_us": 0, 00:05:33.566 "keep_alive_timeout_ms": 10000, 00:05:33.566 "arbitration_burst": 0, 00:05:33.566 "low_priority_weight": 0, 00:05:33.566 "medium_priority_weight": 0, 00:05:33.566 "high_priority_weight": 0, 00:05:33.566 "nvme_adminq_poll_period_us": 10000, 00:05:33.566 "nvme_ioq_poll_period_us": 0, 00:05:33.566 "io_queue_requests": 0, 00:05:33.566 "delay_cmd_submit": true, 00:05:33.566 "transport_retry_count": 4, 00:05:33.566 "bdev_retry_count": 3, 00:05:33.566 "transport_ack_timeout": 0, 00:05:33.566 "ctrlr_loss_timeout_sec": 0, 00:05:33.566 "reconnect_delay_sec": 0, 00:05:33.566 "fast_io_fail_timeout_sec": 0, 00:05:33.566 "disable_auto_failback": false, 00:05:33.566 "generate_uuids": false, 00:05:33.566 "transport_tos": 0, 00:05:33.566 "nvme_error_stat": false, 00:05:33.566 "rdma_srq_size": 0, 00:05:33.566 "io_path_stat": false, 00:05:33.566 "allow_accel_sequence": false, 00:05:33.566 "rdma_max_cq_size": 0, 00:05:33.566 "rdma_cm_event_timeout_ms": 0, 00:05:33.566 "dhchap_digests": [ 00:05:33.566 "sha256", 00:05:33.566 "sha384", 00:05:33.566 "sha512" 00:05:33.566 ], 00:05:33.566 "dhchap_dhgroups": [ 00:05:33.566 "null", 00:05:33.566 "ffdhe2048", 00:05:33.567 "ffdhe3072", 00:05:33.567 "ffdhe4096", 00:05:33.567 "ffdhe6144", 00:05:33.567 "ffdhe8192" 00:05:33.567 ] 00:05:33.567 } 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "method": "bdev_nvme_set_hotplug", 00:05:33.567 "params": { 00:05:33.567 "period_us": 100000, 00:05:33.567 "enable": false 00:05:33.567 } 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "method": "bdev_wait_for_examine" 00:05:33.567 } 00:05:33.567 ] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "scsi", 00:05:33.567 "config": null 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "scheduler", 00:05:33.567 "config": [ 00:05:33.567 { 00:05:33.567 "method": "framework_set_scheduler", 00:05:33.567 "params": { 00:05:33.567 "name": "static" 00:05:33.567 } 00:05:33.567 } 00:05:33.567 ] 00:05:33.567 }, 
00:05:33.567 { 00:05:33.567 "subsystem": "vhost_scsi", 00:05:33.567 "config": [] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "vhost_blk", 00:05:33.567 "config": [] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "ublk", 00:05:33.567 "config": [] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "nbd", 00:05:33.567 "config": [] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "nvmf", 00:05:33.567 "config": [ 00:05:33.567 { 00:05:33.567 "method": "nvmf_set_config", 00:05:33.567 "params": { 00:05:33.567 "discovery_filter": "match_any", 00:05:33.567 "admin_cmd_passthru": { 00:05:33.567 "identify_ctrlr": false 00:05:33.567 } 00:05:33.567 } 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "method": "nvmf_set_max_subsystems", 00:05:33.567 "params": { 00:05:33.567 "max_subsystems": 1024 00:05:33.567 } 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "method": "nvmf_set_crdt", 00:05:33.567 "params": { 00:05:33.567 "crdt1": 0, 00:05:33.567 "crdt2": 0, 00:05:33.567 "crdt3": 0 00:05:33.567 } 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "method": "nvmf_create_transport", 00:05:33.567 "params": { 00:05:33.567 "trtype": "TCP", 00:05:33.567 "max_queue_depth": 128, 00:05:33.567 "max_io_qpairs_per_ctrlr": 127, 00:05:33.567 "in_capsule_data_size": 4096, 00:05:33.567 "max_io_size": 131072, 00:05:33.567 "io_unit_size": 131072, 00:05:33.567 "max_aq_depth": 128, 00:05:33.567 "num_shared_buffers": 511, 00:05:33.567 "buf_cache_size": 4294967295, 00:05:33.567 "dif_insert_or_strip": false, 00:05:33.567 "zcopy": false, 00:05:33.567 "c2h_success": true, 00:05:33.567 "sock_priority": 0, 00:05:33.567 "abort_timeout_sec": 1, 00:05:33.567 "ack_timeout": 0, 00:05:33.567 "data_wr_pool_size": 0 00:05:33.567 } 00:05:33.567 } 00:05:33.567 ] 00:05:33.567 }, 00:05:33.567 { 00:05:33.567 "subsystem": "iscsi", 00:05:33.567 "config": [ 00:05:33.567 { 00:05:33.567 "method": "iscsi_set_options", 00:05:33.567 "params": { 00:05:33.567 "node_base": "iqn.2016-06.io.spdk", 00:05:33.567 
"max_sessions": 128, 00:05:33.567 "max_connections_per_session": 2, 00:05:33.567 "max_queue_depth": 64, 00:05:33.567 "default_time2wait": 2, 00:05:33.567 "default_time2retain": 20, 00:05:33.567 "first_burst_length": 8192, 00:05:33.567 "immediate_data": true, 00:05:33.567 "allow_duplicated_isid": false, 00:05:33.567 "error_recovery_level": 0, 00:05:33.567 "nop_timeout": 60, 00:05:33.567 "nop_in_interval": 30, 00:05:33.567 "disable_chap": false, 00:05:33.567 "require_chap": false, 00:05:33.567 "mutual_chap": false, 00:05:33.567 "chap_group": 0, 00:05:33.567 "max_large_datain_per_connection": 64, 00:05:33.567 "max_r2t_per_connection": 4, 00:05:33.567 "pdu_pool_size": 36864, 00:05:33.567 "immediate_data_pool_size": 16384, 00:05:33.567 "data_out_pool_size": 2048 00:05:33.567 } 00:05:33.567 } 00:05:33.567 ] 00:05:33.567 } 00:05:33.567 ] 00:05:33.567 } 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1159491 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1159491 ']' 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1159491 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1159491 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1159491' 00:05:33.567 killing process with pid 1159491 
00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1159491 00:05:33.567 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1159491 00:05:33.825 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1159513 00:05:33.825 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:33.825 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1159513 ']' 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1159513' 00:05:39.092 killing process with pid 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1159513 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:39.092 00:05:39.092 real 0m6.201s 00:05:39.092 user 0m5.896s 00:05:39.092 sys 0m0.633s 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.092 23:49:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.092 ************************************ 00:05:39.092 END TEST skip_rpc_with_json 00:05:39.092 ************************************ 00:05:39.092 23:49:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:39.092 23:49:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.092 23:49:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.092 23:49:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 ************************************ 00:05:39.350 START TEST skip_rpc_with_delay 00:05:39.350 ************************************ 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:39.350 [2024-07-15 23:49:13.685065] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:39.350 [2024-07-15 23:49:13.685222] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:39.350 00:05:39.350 real 0m0.083s 00:05:39.350 user 0m0.056s 00:05:39.350 sys 0m0.027s 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.350 23:49:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 ************************************ 00:05:39.350 END TEST skip_rpc_with_delay 00:05:39.350 ************************************ 00:05:39.350 23:49:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:39.350 23:49:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:39.350 23:49:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:39.350 23:49:13 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.350 23:49:13 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.350 23:49:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 ************************************ 00:05:39.350 START TEST exit_on_failed_rpc_init 00:05:39.350 ************************************ 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1160071 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1160071 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1160071 ']' 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.350 23:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 [2024-07-15 23:49:13.815663] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:39.350 [2024-07-15 23:49:13.815763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160071 ] 00:05:39.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.607 [2024-07-15 23:49:13.876032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.607 [2024-07-15 23:49:13.967058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.864 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.864 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:39.865 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:39.865 [2024-07-15 23:49:14.241069] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:39.865 [2024-07-15 23:49:14.241181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160167 ] 00:05:39.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.865 [2024-07-15 23:49:14.300886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.123 [2024-07-15 23:49:14.391781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.123 [2024-07-15 23:49:14.391892] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:40.123 [2024-07-15 23:49:14.391913] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:40.123 [2024-07-15 23:49:14.391927] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1160071 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1160071 ']' 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1160071 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1160071 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1160071' 
00:05:40.123 killing process with pid 1160071 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1160071 00:05:40.123 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1160071 00:05:40.381 00:05:40.381 real 0m1.007s 00:05:40.381 user 0m1.209s 00:05:40.381 sys 0m0.389s 00:05:40.381 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.381 23:49:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.381 ************************************ 00:05:40.381 END TEST exit_on_failed_rpc_init 00:05:40.381 ************************************ 00:05:40.381 23:49:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.381 00:05:40.381 real 0m12.872s 00:05:40.381 user 0m12.297s 00:05:40.381 sys 0m1.506s 00:05:40.381 23:49:14 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.381 23:49:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.381 ************************************ 00:05:40.381 END TEST skip_rpc 00:05:40.381 ************************************ 00:05:40.381 23:49:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.381 23:49:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.381 23:49:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.381 23:49:14 -- common/autotest_common.sh@10 -- # set +x 00:05:40.381 ************************************ 00:05:40.381 START TEST rpc_client 00:05:40.381 ************************************ 00:05:40.381 23:49:14 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:40.381 * Looking for test storage... 
00:05:40.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:40.640 23:49:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:40.640 OK 00:05:40.640 23:49:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:40.640 00:05:40.640 real 0m0.068s 00:05:40.640 user 0m0.025s 00:05:40.640 sys 0m0.047s 00:05:40.640 23:49:14 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.640 23:49:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 ************************************ 00:05:40.640 END TEST rpc_client 00:05:40.640 ************************************ 00:05:40.640 23:49:14 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.640 23:49:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.640 23:49:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.640 23:49:14 -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 ************************************ 00:05:40.640 START TEST json_config 00:05:40.640 ************************************ 00:05:40.640 23:49:14 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:40.640 23:49:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.640 23:49:14 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.640 23:49:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.640 23:49:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.640 23:49:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.640 23:49:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.640 23:49:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:40.640 23:49:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.640 23:49:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.640 23:49:15 json_config -- paths/export.sh@5 -- # export PATH 00:05:40.640 23:49:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@47 -- # : 0 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.640 23:49:15 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:40.640 23:49:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:40.640 23:49:15 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:40.640 INFO: JSON configuration test init 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:40.640 23:49:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.640 23:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:40.640 23:49:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:40.640 23:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 23:49:15 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:40.640 23:49:15 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.640 23:49:15 json_config -- json_config/common.sh@10 -- # shift 00:05:40.640 23:49:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.640 23:49:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.640 23:49:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.640 23:49:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.640 23:49:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.640 23:49:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1160370 00:05:40.640 23:49:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.640 Waiting for target to run... 
00:05:40.641 23:49:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:40.641 23:49:15 json_config -- json_config/common.sh@25 -- # waitforlisten 1160370 /var/tmp/spdk_tgt.sock 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@827 -- # '[' -z 1160370 ']' 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:40.641 23:49:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.641 [2024-07-15 23:49:15.074939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:40.641 [2024-07-15 23:49:15.075025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160370 ] 00:05:40.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.898 [2024-07-15 23:49:15.383886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.155 [2024-07-15 23:49:15.450834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.720 23:49:16 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.720 23:49:16 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:41.720 23:49:16 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.721 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:41.721 23:49:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.721 23:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:41.721 23:49:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.721 23:49:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:41.721 23:49:16 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:41.721 23:49:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:45.003 23:49:19 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:45.003 23:49:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:45.003 23:49:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.003 23:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.003 23:49:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:45.004 23:49:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:45.004 23:49:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:45.004 23:49:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:45.004 23:49:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:45.004 23:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:45.262 23:49:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:45.262 23:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:45.262 23:49:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.262 23:49:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:45.262 23:49:19 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.262 23:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.521 MallocForNvmf0 00:05:45.521 23:49:19 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.521 23:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.779 MallocForNvmf1 00:05:45.779 23:49:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.779 23:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.038 [2024-07-15 23:49:20.502194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.038 23:49:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.038 23:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.323 23:49:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.323 23:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.889 23:49:21 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.890 23:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.148 23:49:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.148 23:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.406 [2024-07-15 23:49:21.689938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.406 23:49:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:47.406 23:49:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.406 23:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.406 23:49:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:47.406 23:49:21 
json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.406 23:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.406 23:49:21 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:47.406 23:49:21 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.406 23:49:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.665 MallocBdevForConfigChangeCheck 00:05:47.665 23:49:22 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:47.665 23:49:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.665 23:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.665 23:49:22 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:47.665 23:49:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.284 23:49:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:48.284 INFO: shutting down applications... 
00:05:48.284 23:49:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:48.284 23:49:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:48.284 23:49:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:48.284 23:49:22 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:49.663 Calling clear_iscsi_subsystem 00:05:49.663 Calling clear_nvmf_subsystem 00:05:49.663 Calling clear_nbd_subsystem 00:05:49.663 Calling clear_ublk_subsystem 00:05:49.663 Calling clear_vhost_blk_subsystem 00:05:49.663 Calling clear_vhost_scsi_subsystem 00:05:49.663 Calling clear_bdev_subsystem 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.663 23:49:24 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:50.229 23:49:24 json_config -- json_config/json_config.sh@345 -- # break 00:05:50.229 23:49:24 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:50.229 23:49:24 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:50.229 23:49:24 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:50.229 23:49:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.229 23:49:24 json_config -- json_config/common.sh@35 -- # [[ -n 1160370 ]] 00:05:50.229 23:49:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1160370 00:05:50.229 23:49:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.229 23:49:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.229 23:49:24 json_config -- json_config/common.sh@41 -- # kill -0 1160370 00:05:50.229 23:49:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.797 23:49:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.797 23:49:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.797 23:49:25 json_config -- json_config/common.sh@41 -- # kill -0 1160370 00:05:50.797 23:49:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.797 23:49:25 json_config -- json_config/common.sh@43 -- # break 00:05:50.797 23:49:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.797 23:49:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.797 SPDK target shutdown done 00:05:50.797 23:49:25 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:50.797 INFO: relaunching applications... 
00:05:50.797 23:49:25 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.797 23:49:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.797 23:49:25 json_config -- json_config/common.sh@10 -- # shift 00:05:50.797 23:49:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.797 23:49:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.797 23:49:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.797 23:49:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.797 23:49:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.797 23:49:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1161407 00:05:50.797 23:49:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.797 23:49:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.797 Waiting for target to run... 00:05:50.797 23:49:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1161407 /var/tmp/spdk_tgt.sock 00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@827 -- # '[' -z 1161407 ']' 00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:50.797 23:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.797 [2024-07-15 23:49:25.135706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:50.797 [2024-07-15 23:49:25.135800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161407 ] 00:05:50.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.055 [2024-07-15 23:49:25.441629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.055 [2024-07-15 23:49:25.507512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.335 [2024-07-15 23:49:28.515043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.335 [2024-07-15 23:49:28.547395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:54.335 23:49:28 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:54.335 23:49:28 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:54.335 23:49:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.335 00:05:54.335 23:49:28 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:54.335 23:49:28 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:54.335 INFO: Checking if target configuration is the same... 
00:05:54.335 23:49:28 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.335 23:49:28 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:54.335 23:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.335 + '[' 2 -ne 2 ']' 00:05:54.335 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.335 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:54.335 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:54.335 +++ basename /dev/fd/62 00:05:54.335 ++ mktemp /tmp/62.XXX 00:05:54.335 + tmp_file_1=/tmp/62.esi 00:05:54.335 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.335 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:54.335 + tmp_file_2=/tmp/spdk_tgt_config.json.e5W 00:05:54.335 + ret=0 00:05:54.335 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.593 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:54.593 + diff -u /tmp/62.esi /tmp/spdk_tgt_config.json.e5W 00:05:54.593 + echo 'INFO: JSON config files are the same' 00:05:54.593 INFO: JSON config files are the same 00:05:54.593 + rm /tmp/62.esi /tmp/spdk_tgt_config.json.e5W 00:05:54.593 + exit 0 00:05:54.593 23:49:29 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:54.593 23:49:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:54.593 INFO: changing configuration and checking if this can be detected... 
00:05:54.593 23:49:29 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.593 23:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:54.852 23:49:29 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:54.852 23:49:29 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:54.852 23:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.852 + '[' 2 -ne 2 ']' 00:05:54.852 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:54.852 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:54.852 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:55.110 +++ basename /dev/fd/62 00:05:55.110 ++ mktemp /tmp/62.XXX 00:05:55.110 + tmp_file_1=/tmp/62.ybT 00:05:55.110 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.110 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.110 + tmp_file_2=/tmp/spdk_tgt_config.json.Vvx 00:05:55.110 + ret=0 00:05:55.110 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.368 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:55.368 + diff -u /tmp/62.ybT /tmp/spdk_tgt_config.json.Vvx 00:05:55.368 + ret=1 00:05:55.368 + echo '=== Start of file: /tmp/62.ybT ===' 00:05:55.368 + cat /tmp/62.ybT 00:05:55.368 + echo '=== End of file: /tmp/62.ybT ===' 00:05:55.368 + echo '' 00:05:55.368 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vvx ===' 00:05:55.369 + cat /tmp/spdk_tgt_config.json.Vvx 00:05:55.369 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vvx ===' 00:05:55.369 + echo '' 00:05:55.369 + rm /tmp/62.ybT /tmp/spdk_tgt_config.json.Vvx 00:05:55.369 + exit 1 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:55.369 INFO: configuration change detected. 
00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@317 -- # [[ -n 1161407 ]] 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:55.369 23:49:29 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.369 23:49:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.627 23:49:29 json_config -- json_config/json_config.sh@323 -- # killprocess 1161407 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@946 -- # '[' -z 1161407 ']' 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@950 -- # kill -0 
1161407 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@951 -- # uname 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1161407 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1161407' 00:05:55.627 killing process with pid 1161407 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@965 -- # kill 1161407 00:05:55.627 23:49:29 json_config -- common/autotest_common.sh@970 -- # wait 1161407 00:05:56.998 23:49:31 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.998 23:49:31 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:56.998 23:49:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.998 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.998 23:49:31 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:56.999 23:49:31 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:56.999 INFO: Success 00:05:56.999 00:05:56.999 real 0m16.521s 00:05:56.999 user 0m19.361s 00:05:56.999 sys 0m1.829s 00:05:56.999 23:49:31 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.999 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.999 ************************************ 00:05:56.999 END TEST json_config 00:05:56.999 ************************************ 00:05:56.999 23:49:31 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:56.999 23:49:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:56.999 23:49:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.999 23:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:57.257 ************************************ 00:05:57.257 START TEST json_config_extra_key 00:05:57.257 ************************************ 00:05:57.257 23:49:31 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:57.257 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:57.257 23:49:31 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.257 23:49:31 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.257 23:49:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.257 23:49:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.257 23:49:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.257 23:49:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.257 23:49:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.258 23:49:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.258 23:49:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:57.258 23:49:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.258 23:49:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:57.258 23:49:31 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:57.258 INFO: launching applications... 
00:05:57.258 23:49:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1162119 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:57.258 Waiting for target to run... 
00:05:57.258 23:49:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1162119 /var/tmp/spdk_tgt.sock 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1162119 ']' 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:57.258 23:49:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.258 [2024-07-15 23:49:31.638158] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:05:57.258 [2024-07-15 23:49:31.638249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162119 ] 00:05:57.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.539 [2024-07-15 23:49:31.941726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.539 [2024-07-15 23:49:32.007635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.474 23:49:32 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:58.474 23:49:32 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:58.474 00:05:58.474 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:58.474 INFO: shutting down applications... 
00:05:58.474 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1162119 ]] 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1162119 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1162119 00:05:58.474 23:49:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1162119 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:58.734 23:49:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:58.734 SPDK target shutdown done 00:05:58.734 23:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:58.734 Success 00:05:58.734 00:05:58.734 real 0m1.644s 00:05:58.734 user 0m1.564s 00:05:58.734 sys 0m0.406s 00:05:58.734 23:49:33 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.734 23:49:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:58.734 
************************************ 00:05:58.734 END TEST json_config_extra_key 00:05:58.734 ************************************ 00:05:58.734 23:49:33 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.734 23:49:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:58.734 23:49:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.734 23:49:33 -- common/autotest_common.sh@10 -- # set +x 00:05:58.734 ************************************ 00:05:58.734 START TEST alias_rpc 00:05:58.734 ************************************ 00:05:58.734 23:49:33 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:58.992 * Looking for test storage... 00:05:58.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:58.992 23:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:58.992 23:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1162279 00:05:58.992 23:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1162279 00:05:58.992 23:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1162279 ']' 00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:58.992 23:49:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.992 [2024-07-15 23:49:33.334758] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:58.992 [2024-07-15 23:49:33.334867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162279 ] 00:05:58.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.992 [2024-07-15 23:49:33.397847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.992 [2024-07-15 23:49:33.485180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.249 23:49:33 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.249 23:49:33 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:59.249 23:49:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:59.506 23:49:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1162279 00:05:59.506 23:49:34 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1162279 ']' 00:05:59.506 23:49:34 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1162279 00:05:59.506 23:49:34 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:59.506 23:49:34 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1162279 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1162279' 00:05:59.764 killing process 
with pid 1162279 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@965 -- # kill 1162279 00:05:59.764 23:49:34 alias_rpc -- common/autotest_common.sh@970 -- # wait 1162279 00:06:00.023 00:06:00.023 real 0m1.089s 00:06:00.023 user 0m1.272s 00:06:00.023 sys 0m0.393s 00:06:00.023 23:49:34 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.023 23:49:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 ************************************ 00:06:00.023 END TEST alias_rpc 00:06:00.023 ************************************ 00:06:00.023 23:49:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:00.023 23:49:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.023 23:49:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.023 23:49:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.023 23:49:34 -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 ************************************ 00:06:00.023 START TEST spdkcli_tcp 00:06:00.023 ************************************ 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:00.023 * Looking for test storage... 
00:06:00.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1162432 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:00.023 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1162432 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1162432 ']' 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.023 23:49:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.023 [2024-07-15 23:49:34.475766] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:00.023 [2024-07-15 23:49:34.475856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162432 ] 00:06:00.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.023 [2024-07-15 23:49:34.535961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.283 [2024-07-15 23:49:34.624237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.283 [2024-07-15 23:49:34.624269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.542 23:49:34 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.542 23:49:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:00.542 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1162526 00:06:00.542 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.542 23:49:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.801 [ 00:06:00.801 "bdev_malloc_delete", 00:06:00.801 "bdev_malloc_create", 00:06:00.801 "bdev_null_resize", 00:06:00.801 "bdev_null_delete", 00:06:00.801 "bdev_null_create", 00:06:00.801 "bdev_nvme_cuse_unregister", 00:06:00.801 "bdev_nvme_cuse_register", 00:06:00.801 "bdev_opal_new_user", 00:06:00.801 "bdev_opal_set_lock_state", 00:06:00.801 "bdev_opal_delete", 00:06:00.801 "bdev_opal_get_info", 00:06:00.801 "bdev_opal_create", 00:06:00.801 "bdev_nvme_opal_revert", 00:06:00.801 "bdev_nvme_opal_init", 
00:06:00.801 "bdev_nvme_send_cmd", 00:06:00.801 "bdev_nvme_get_path_iostat", 00:06:00.801 "bdev_nvme_get_mdns_discovery_info", 00:06:00.801 "bdev_nvme_stop_mdns_discovery", 00:06:00.801 "bdev_nvme_start_mdns_discovery", 00:06:00.801 "bdev_nvme_set_multipath_policy", 00:06:00.801 "bdev_nvme_set_preferred_path", 00:06:00.801 "bdev_nvme_get_io_paths", 00:06:00.801 "bdev_nvme_remove_error_injection", 00:06:00.801 "bdev_nvme_add_error_injection", 00:06:00.801 "bdev_nvme_get_discovery_info", 00:06:00.801 "bdev_nvme_stop_discovery", 00:06:00.801 "bdev_nvme_start_discovery", 00:06:00.801 "bdev_nvme_get_controller_health_info", 00:06:00.802 "bdev_nvme_disable_controller", 00:06:00.802 "bdev_nvme_enable_controller", 00:06:00.802 "bdev_nvme_reset_controller", 00:06:00.802 "bdev_nvme_get_transport_statistics", 00:06:00.802 "bdev_nvme_apply_firmware", 00:06:00.802 "bdev_nvme_detach_controller", 00:06:00.802 "bdev_nvme_get_controllers", 00:06:00.802 "bdev_nvme_attach_controller", 00:06:00.802 "bdev_nvme_set_hotplug", 00:06:00.802 "bdev_nvme_set_options", 00:06:00.802 "bdev_passthru_delete", 00:06:00.802 "bdev_passthru_create", 00:06:00.802 "bdev_lvol_set_parent_bdev", 00:06:00.802 "bdev_lvol_set_parent", 00:06:00.802 "bdev_lvol_check_shallow_copy", 00:06:00.802 "bdev_lvol_start_shallow_copy", 00:06:00.802 "bdev_lvol_grow_lvstore", 00:06:00.802 "bdev_lvol_get_lvols", 00:06:00.802 "bdev_lvol_get_lvstores", 00:06:00.802 "bdev_lvol_delete", 00:06:00.802 "bdev_lvol_set_read_only", 00:06:00.802 "bdev_lvol_resize", 00:06:00.802 "bdev_lvol_decouple_parent", 00:06:00.802 "bdev_lvol_inflate", 00:06:00.802 "bdev_lvol_rename", 00:06:00.802 "bdev_lvol_clone_bdev", 00:06:00.802 "bdev_lvol_clone", 00:06:00.802 "bdev_lvol_snapshot", 00:06:00.802 "bdev_lvol_create", 00:06:00.802 "bdev_lvol_delete_lvstore", 00:06:00.802 "bdev_lvol_rename_lvstore", 00:06:00.802 "bdev_lvol_create_lvstore", 00:06:00.802 "bdev_raid_set_options", 00:06:00.802 "bdev_raid_remove_base_bdev", 00:06:00.802 
"bdev_raid_add_base_bdev", 00:06:00.802 "bdev_raid_delete", 00:06:00.802 "bdev_raid_create", 00:06:00.802 "bdev_raid_get_bdevs", 00:06:00.802 "bdev_error_inject_error", 00:06:00.802 "bdev_error_delete", 00:06:00.802 "bdev_error_create", 00:06:00.802 "bdev_split_delete", 00:06:00.802 "bdev_split_create", 00:06:00.802 "bdev_delay_delete", 00:06:00.802 "bdev_delay_create", 00:06:00.802 "bdev_delay_update_latency", 00:06:00.802 "bdev_zone_block_delete", 00:06:00.802 "bdev_zone_block_create", 00:06:00.802 "blobfs_create", 00:06:00.802 "blobfs_detect", 00:06:00.802 "blobfs_set_cache_size", 00:06:00.802 "bdev_aio_delete", 00:06:00.802 "bdev_aio_rescan", 00:06:00.802 "bdev_aio_create", 00:06:00.802 "bdev_ftl_set_property", 00:06:00.802 "bdev_ftl_get_properties", 00:06:00.802 "bdev_ftl_get_stats", 00:06:00.802 "bdev_ftl_unmap", 00:06:00.802 "bdev_ftl_unload", 00:06:00.802 "bdev_ftl_delete", 00:06:00.802 "bdev_ftl_load", 00:06:00.802 "bdev_ftl_create", 00:06:00.802 "bdev_virtio_attach_controller", 00:06:00.802 "bdev_virtio_scsi_get_devices", 00:06:00.802 "bdev_virtio_detach_controller", 00:06:00.802 "bdev_virtio_blk_set_hotplug", 00:06:00.802 "bdev_iscsi_delete", 00:06:00.802 "bdev_iscsi_create", 00:06:00.802 "bdev_iscsi_set_options", 00:06:00.802 "accel_error_inject_error", 00:06:00.802 "ioat_scan_accel_module", 00:06:00.802 "dsa_scan_accel_module", 00:06:00.802 "iaa_scan_accel_module", 00:06:00.802 "vfu_virtio_create_scsi_endpoint", 00:06:00.802 "vfu_virtio_scsi_remove_target", 00:06:00.802 "vfu_virtio_scsi_add_target", 00:06:00.802 "vfu_virtio_create_blk_endpoint", 00:06:00.802 "vfu_virtio_delete_endpoint", 00:06:00.802 "keyring_file_remove_key", 00:06:00.802 "keyring_file_add_key", 00:06:00.802 "keyring_linux_set_options", 00:06:00.802 "iscsi_get_histogram", 00:06:00.802 "iscsi_enable_histogram", 00:06:00.802 "iscsi_set_options", 00:06:00.802 "iscsi_get_auth_groups", 00:06:00.802 "iscsi_auth_group_remove_secret", 00:06:00.802 "iscsi_auth_group_add_secret", 00:06:00.802 
"iscsi_delete_auth_group", 00:06:00.802 "iscsi_create_auth_group", 00:06:00.802 "iscsi_set_discovery_auth", 00:06:00.802 "iscsi_get_options", 00:06:00.802 "iscsi_target_node_request_logout", 00:06:00.802 "iscsi_target_node_set_redirect", 00:06:00.802 "iscsi_target_node_set_auth", 00:06:00.802 "iscsi_target_node_add_lun", 00:06:00.802 "iscsi_get_stats", 00:06:00.802 "iscsi_get_connections", 00:06:00.802 "iscsi_portal_group_set_auth", 00:06:00.802 "iscsi_start_portal_group", 00:06:00.802 "iscsi_delete_portal_group", 00:06:00.802 "iscsi_create_portal_group", 00:06:00.802 "iscsi_get_portal_groups", 00:06:00.802 "iscsi_delete_target_node", 00:06:00.802 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.802 "iscsi_target_node_add_pg_ig_maps", 00:06:00.802 "iscsi_create_target_node", 00:06:00.802 "iscsi_get_target_nodes", 00:06:00.802 "iscsi_delete_initiator_group", 00:06:00.802 "iscsi_initiator_group_remove_initiators", 00:06:00.802 "iscsi_initiator_group_add_initiators", 00:06:00.802 "iscsi_create_initiator_group", 00:06:00.802 "iscsi_get_initiator_groups", 00:06:00.802 "nvmf_set_crdt", 00:06:00.802 "nvmf_set_config", 00:06:00.802 "nvmf_set_max_subsystems", 00:06:00.802 "nvmf_stop_mdns_prr", 00:06:00.802 "nvmf_publish_mdns_prr", 00:06:00.802 "nvmf_subsystem_get_listeners", 00:06:00.802 "nvmf_subsystem_get_qpairs", 00:06:00.802 "nvmf_subsystem_get_controllers", 00:06:00.802 "nvmf_get_stats", 00:06:00.802 "nvmf_get_transports", 00:06:00.802 "nvmf_create_transport", 00:06:00.802 "nvmf_get_targets", 00:06:00.802 "nvmf_delete_target", 00:06:00.802 "nvmf_create_target", 00:06:00.802 "nvmf_subsystem_allow_any_host", 00:06:00.802 "nvmf_subsystem_remove_host", 00:06:00.802 "nvmf_subsystem_add_host", 00:06:00.802 "nvmf_ns_remove_host", 00:06:00.802 "nvmf_ns_add_host", 00:06:00.802 "nvmf_subsystem_remove_ns", 00:06:00.802 "nvmf_subsystem_add_ns", 00:06:00.802 "nvmf_subsystem_listener_set_ana_state", 00:06:00.802 "nvmf_discovery_get_referrals", 00:06:00.802 
"nvmf_discovery_remove_referral", 00:06:00.802 "nvmf_discovery_add_referral", 00:06:00.802 "nvmf_subsystem_remove_listener", 00:06:00.802 "nvmf_subsystem_add_listener", 00:06:00.802 "nvmf_delete_subsystem", 00:06:00.802 "nvmf_create_subsystem", 00:06:00.802 "nvmf_get_subsystems", 00:06:00.802 "env_dpdk_get_mem_stats", 00:06:00.802 "nbd_get_disks", 00:06:00.802 "nbd_stop_disk", 00:06:00.802 "nbd_start_disk", 00:06:00.802 "ublk_recover_disk", 00:06:00.802 "ublk_get_disks", 00:06:00.802 "ublk_stop_disk", 00:06:00.802 "ublk_start_disk", 00:06:00.802 "ublk_destroy_target", 00:06:00.802 "ublk_create_target", 00:06:00.802 "virtio_blk_create_transport", 00:06:00.802 "virtio_blk_get_transports", 00:06:00.802 "vhost_controller_set_coalescing", 00:06:00.802 "vhost_get_controllers", 00:06:00.802 "vhost_delete_controller", 00:06:00.802 "vhost_create_blk_controller", 00:06:00.802 "vhost_scsi_controller_remove_target", 00:06:00.802 "vhost_scsi_controller_add_target", 00:06:00.802 "vhost_start_scsi_controller", 00:06:00.802 "vhost_create_scsi_controller", 00:06:00.802 "thread_set_cpumask", 00:06:00.802 "framework_get_scheduler", 00:06:00.802 "framework_set_scheduler", 00:06:00.802 "framework_get_reactors", 00:06:00.802 "thread_get_io_channels", 00:06:00.802 "thread_get_pollers", 00:06:00.802 "thread_get_stats", 00:06:00.802 "framework_monitor_context_switch", 00:06:00.802 "spdk_kill_instance", 00:06:00.802 "log_enable_timestamps", 00:06:00.802 "log_get_flags", 00:06:00.802 "log_clear_flag", 00:06:00.802 "log_set_flag", 00:06:00.802 "log_get_level", 00:06:00.802 "log_set_level", 00:06:00.802 "log_get_print_level", 00:06:00.802 "log_set_print_level", 00:06:00.802 "framework_enable_cpumask_locks", 00:06:00.802 "framework_disable_cpumask_locks", 00:06:00.802 "framework_wait_init", 00:06:00.802 "framework_start_init", 00:06:00.802 "scsi_get_devices", 00:06:00.802 "bdev_get_histogram", 00:06:00.802 "bdev_enable_histogram", 00:06:00.802 "bdev_set_qos_limit", 00:06:00.802 
"bdev_set_qd_sampling_period", 00:06:00.802 "bdev_get_bdevs", 00:06:00.802 "bdev_reset_iostat", 00:06:00.802 "bdev_get_iostat", 00:06:00.802 "bdev_examine", 00:06:00.802 "bdev_wait_for_examine", 00:06:00.802 "bdev_set_options", 00:06:00.802 "notify_get_notifications", 00:06:00.802 "notify_get_types", 00:06:00.802 "accel_get_stats", 00:06:00.802 "accel_set_options", 00:06:00.802 "accel_set_driver", 00:06:00.802 "accel_crypto_key_destroy", 00:06:00.802 "accel_crypto_keys_get", 00:06:00.802 "accel_crypto_key_create", 00:06:00.803 "accel_assign_opc", 00:06:00.803 "accel_get_module_info", 00:06:00.803 "accel_get_opc_assignments", 00:06:00.803 "vmd_rescan", 00:06:00.803 "vmd_remove_device", 00:06:00.803 "vmd_enable", 00:06:00.803 "sock_get_default_impl", 00:06:00.803 "sock_set_default_impl", 00:06:00.803 "sock_impl_set_options", 00:06:00.803 "sock_impl_get_options", 00:06:00.803 "iobuf_get_stats", 00:06:00.803 "iobuf_set_options", 00:06:00.803 "keyring_get_keys", 00:06:00.803 "framework_get_pci_devices", 00:06:00.803 "framework_get_config", 00:06:00.803 "framework_get_subsystems", 00:06:00.803 "vfu_tgt_set_base_path", 00:06:00.803 "trace_get_info", 00:06:00.803 "trace_get_tpoint_group_mask", 00:06:00.803 "trace_disable_tpoint_group", 00:06:00.803 "trace_enable_tpoint_group", 00:06:00.803 "trace_clear_tpoint_mask", 00:06:00.803 "trace_set_tpoint_mask", 00:06:00.803 "spdk_get_version", 00:06:00.803 "rpc_get_methods" 00:06:00.803 ] 00:06:00.803 23:49:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.803 23:49:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.803 23:49:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1162432 00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1162432 ']' 00:06:00.803 23:49:35 spdkcli_tcp -- 
common/autotest_common.sh@950 -- # kill -0 1162432
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1162432
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1162432'
00:06:00.803 killing process with pid 1162432
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1162432
00:06:00.803 23:49:35 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1162432
00:06:01.062
00:06:01.062 real 0m1.098s
00:06:01.062 user 0m2.047s
00:06:01.062 sys 0m0.422s
00:06:01.062 23:49:35 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:01.062 23:49:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:01.062 ************************************
00:06:01.062 END TEST spdkcli_tcp
00:06:01.062 ************************************
00:06:01.062 23:49:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:01.062 23:49:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:01.062 23:49:35 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:01.062 23:49:35 -- common/autotest_common.sh@10 -- # set +x
00:06:01.062 ************************************
00:06:01.062 START TEST dpdk_mem_utility
00:06:01.062 ************************************
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:01.062 * Looking for test storage...
00:06:01.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:06:01.062 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:01.062 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1162602
00:06:01.062 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:01.062 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1162602
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1162602 ']'
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:01.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:01.062 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:01.320 [2024-07-15 23:49:35.624719] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:01.320 [2024-07-15 23:49:35.624816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162602 ] 00:06:01.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.320 [2024-07-15 23:49:35.687781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.320 [2024-07-15 23:49:35.775021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.578 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.578 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:01.578 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.578 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.578 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.578 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.578 { 00:06:01.578 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.578 } 00:06:01.578 23:49:35 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.578 23:49:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:01.579 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:01.579 1 heaps totaling size 814.000000 MiB 00:06:01.579 size: 814.000000 MiB heap id: 0 00:06:01.579 end heaps---------- 00:06:01.579 8 mempools totaling size 598.116089 MiB 00:06:01.579 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.579 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.579 size: 84.521057 MiB name: bdev_io_1162602 00:06:01.579 size: 51.011292 MiB name: evtpool_1162602 00:06:01.579 size: 50.003479 
MiB name: msgpool_1162602 00:06:01.579 size: 21.763794 MiB name: PDU_Pool 00:06:01.579 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.579 size: 0.026123 MiB name: Session_Pool 00:06:01.579 end mempools------- 00:06:01.579 6 memzones totaling size 4.142822 MiB 00:06:01.579 size: 1.000366 MiB name: RG_ring_0_1162602 00:06:01.579 size: 1.000366 MiB name: RG_ring_1_1162602 00:06:01.579 size: 1.000366 MiB name: RG_ring_4_1162602 00:06:01.579 size: 1.000366 MiB name: RG_ring_5_1162602 00:06:01.579 size: 0.125366 MiB name: RG_ring_2_1162602 00:06:01.579 size: 0.015991 MiB name: RG_ring_3_1162602 00:06:01.579 end memzones------- 00:06:01.579 23:49:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.838 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:01.838 list of free elements. size: 12.519348 MiB 00:06:01.838 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:01.838 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:01.838 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:01.838 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:01.838 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:01.838 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:01.838 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:01.838 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:01.838 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:01.838 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:01.838 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:01.838 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:01.838 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:01.838 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:01.838 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:06:01.838 list of standard malloc elements. size: 199.218079 MiB 00:06:01.838 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:01.838 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:01.838 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:01.838 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:01.838 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:01.838 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:01.838 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:01.838 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:01.838 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:01.838 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:06:01.838 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:01.838 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:01.838 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:01.838 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:01.838 list of memzone associated elements. 
size: 602.262573 MiB 00:06:01.838 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:01.838 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.838 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:01.838 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.838 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:01.838 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1162602_0 00:06:01.838 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:01.838 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1162602_0 00:06:01.838 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:01.838 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1162602_0 00:06:01.838 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:01.838 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.838 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:01.838 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.838 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:01.838 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1162602 00:06:01.838 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:01.838 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1162602 00:06:01.838 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:01.838 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1162602 00:06:01.838 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:01.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.838 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:01.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.838 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:01.838 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.838 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:01.838 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.838 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:01.838 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1162602 00:06:01.838 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:01.838 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1162602 00:06:01.838 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:01.838 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1162602 00:06:01.838 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:01.838 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1162602 00:06:01.838 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:01.838 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1162602 00:06:01.838 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:01.838 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.838 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:01.838 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.838 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:01.838 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.838 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:01.838 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1162602 00:06:01.838 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:01.838 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.838 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:01.838 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.838 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:01.838 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1162602 00:06:01.838 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:01.838 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.838 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:01.838 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1162602 00:06:01.838 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:01.838 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1162602 00:06:01.838 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:01.838 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.838 23:49:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.838 23:49:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1162602 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1162602 ']' 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1162602 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1162602 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1162602' 00:06:01.838 killing process with pid 1162602 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1162602 00:06:01.838 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1162602 00:06:02.097 00:06:02.097 real 0m0.906s 
00:06:02.097 user 0m0.948s
00:06:02.097 sys 0m0.386s
00:06:02.097 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:02.097 23:49:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:02.097 ************************************
00:06:02.097 END TEST dpdk_mem_utility
00:06:02.097 ************************************
00:06:02.097 23:49:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:02.097 23:49:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:02.097 23:49:36 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:02.097 23:49:36 -- common/autotest_common.sh@10 -- # set +x
00:06:02.097 ************************************
00:06:02.097 START TEST event
00:06:02.097 ************************************
00:06:02.097 23:49:36 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:02.097 * Looking for test storage...
00:06:02.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:02.097 23:49:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:02.097 23:49:36 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:02.097 23:49:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:02.097 23:49:36 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:02.097 23:49:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:02.097 23:49:36 event -- common/autotest_common.sh@10 -- # set +x
00:06:02.097 ************************************
00:06:02.097 START TEST event_perf
00:06:02.097 ************************************
00:06:02.097 23:49:36 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:02.097 Running I/O for 1 seconds...[2024-07-15 23:49:36.565244] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:02.097 [2024-07-15 23:49:36.565316] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162758 ]
00:06:02.097 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.355 [2024-07-15 23:49:36.628319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:02.355 [2024-07-15 23:49:36.723163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.355 [2024-07-15 23:49:36.723249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:02.355 [2024-07-15 23:49:36.723312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:02.355 [2024-07-15 23:49:36.723353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.284 Running I/O for 1 seconds...
00:06:03.284 lcore 0: 230637
00:06:03.284 lcore 1: 230637
00:06:03.284 lcore 2: 230637
00:06:03.284 lcore 3: 230637
00:06:03.284 done.
00:06:03.284
00:06:03.284 real 0m1.237s
00:06:03.284 user 0m4.142s
00:06:03.284 sys 0m0.083s
00:06:03.284 23:49:37 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:03.284 23:49:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:03.284 ************************************
00:06:03.284 END TEST event_perf
00:06:03.284 ************************************
00:06:03.541 23:49:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:03.541 23:49:37 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:03.541 23:49:37 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:03.541 23:49:37 event -- common/autotest_common.sh@10 -- # set +x
00:06:03.541 ************************************
00:06:03.541 START TEST event_reactor
00:06:03.541 ************************************
00:06:03.541 23:49:37 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:03.541 [2024-07-15 23:49:37.856286] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:03.541 [2024-07-15 23:49:37.856365] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162888 ] 00:06:03.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.541 [2024-07-15 23:49:37.914475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.541 [2024-07-15 23:49:38.005451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.908 test_start 00:06:04.908 oneshot 00:06:04.908 tick 100 00:06:04.908 tick 100 00:06:04.908 tick 250 00:06:04.908 tick 100 00:06:04.908 tick 100 00:06:04.908 tick 100 00:06:04.908 tick 250 00:06:04.908 tick 500 00:06:04.908 tick 100 00:06:04.908 tick 100 00:06:04.908 tick 250 00:06:04.908 tick 100 00:06:04.908 tick 100 00:06:04.908 test_end 00:06:04.908 00:06:04.908 real 0m1.225s 00:06:04.908 user 0m1.141s 00:06:04.908 sys 0m0.077s 00:06:04.908 23:49:39 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.908 23:49:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:04.908 ************************************ 00:06:04.908 END TEST event_reactor 00:06:04.908 ************************************ 00:06:04.909 23:49:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.909 23:49:39 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:04.909 23:49:39 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.909 23:49:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.909 ************************************ 00:06:04.909 START TEST event_reactor_perf 00:06:04.909 ************************************ 00:06:04.909 23:49:39 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:04.909 [2024-07-15 23:49:39.134729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:04.909 [2024-07-15 23:49:39.134794] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163024 ] 00:06:04.909 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.909 [2024-07-15 23:49:39.193013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.909 [2024-07-15 23:49:39.284080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.841 test_start 00:06:05.841 test_end 00:06:05.841 Performance: 324030 events per second 00:06:05.841 00:06:05.841 real 0m1.224s 00:06:05.841 user 0m1.143s 00:06:05.841 sys 0m0.076s 00:06:05.841 23:49:40 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.841 23:49:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.841 ************************************ 00:06:05.841 END TEST event_reactor_perf 00:06:05.841 ************************************ 00:06:06.128 23:49:40 event -- event/event.sh@49 -- # uname -s 00:06:06.128 23:49:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.128 23:49:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.128 23:49:40 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.128 23:49:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.128 23:49:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.128 ************************************ 00:06:06.128 START TEST event_scheduler 00:06:06.128 ************************************ 00:06:06.128 23:49:40 
event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:06.128 * Looking for test storage... 00:06:06.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:06.128 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.128 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1163244 00:06:06.128 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.128 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.128 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1163244 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1163244 ']' 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:06.128 23:49:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.128 [2024-07-15 23:49:40.508764] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:06.128 [2024-07-15 23:49:40.508869] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163244 ] 00:06:06.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.128 [2024-07-15 23:49:40.569143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.386 [2024-07-15 23:49:40.660042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.386 [2024-07-15 23:49:40.660094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.386 [2024-07-15 23:49:40.660160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.386 [2024-07-15 23:49:40.660164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:06.386 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.386 POWER: Env isn't set yet! 00:06:06.386 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:06.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:06.386 POWER: Cannot get available frequencies of lcore 0 00:06:06.386 POWER: Attempting to initialise PSTAT power management... 
00:06:06.386 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:06.386 POWER: Initialized successfully for lcore 0 power management 00:06:06.386 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:06.386 POWER: Initialized successfully for lcore 1 power management 00:06:06.386 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:06.386 POWER: Initialized successfully for lcore 2 power management 00:06:06.386 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:06.386 POWER: Initialized successfully for lcore 3 power management 00:06:06.386 [2024-07-15 23:49:40.795329] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:06.386 [2024-07-15 23:49:40.795350] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:06.386 [2024-07-15 23:49:40.795362] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.386 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.386 [2024-07-15 23:49:40.880798] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.386 23:49:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.386 23:49:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 ************************************ 00:06:06.644 START TEST scheduler_create_thread 00:06:06.644 ************************************ 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 2 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 3 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 4 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 5 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 6 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.644 7 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 8 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 9 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 10 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:06.644 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.644 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.644 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.644 23:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.645 23:49:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.018 23:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.018 23:49:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:08.018 23:49:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:08.018 23:49:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.018 23:49:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.388 23:49:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.388 00:06:09.388 real 0m2.620s 00:06:09.388 user 0m0.013s 00:06:09.388 sys 0m0.005s 00:06:09.388 23:49:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.388 23:49:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.389 ************************************ 00:06:09.389 END TEST scheduler_create_thread 00:06:09.389 ************************************ 00:06:09.389 23:49:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:09.389 23:49:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1163244 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1163244 ']' 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1163244 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1163244 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1163244' 00:06:09.389 killing process with pid 1163244 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1163244 00:06:09.389 23:49:43 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1163244 00:06:09.647 [2024-07-15 
23:49:44.011909] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:09.647 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:09.647 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:09.647 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:09.647 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:09.647 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:09.647 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:09.647 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:09.647 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:09.905 00:06:09.905 real 0m3.785s 00:06:09.905 user 0m5.893s 00:06:09.905 sys 0m0.324s 00:06:09.905 23:49:44 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.905 23:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 ************************************ 00:06:09.905 END TEST event_scheduler 00:06:09.905 ************************************ 00:06:09.905 23:49:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.905 23:49:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.905 23:49:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.905 23:49:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.905 23:49:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 ************************************ 00:06:09.905 START TEST app_repeat 00:06:09.905 ************************************ 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 
00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1163605 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1163605' 00:06:09.905 Process app_repeat pid: 1163605 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.905 spdk_app_start Round 0 00:06:09.905 23:49:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1163605 /var/tmp/spdk-nbd.sock 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1163605 ']' 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.905 23:49:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 [2024-07-15 23:49:44.277224] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:09.905 [2024-07-15 23:49:44.277293] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163605 ] 00:06:09.905 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.905 [2024-07-15 23:49:44.337168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.163 [2024-07-15 23:49:44.433164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.163 [2024-07-15 23:49:44.433200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.163 23:49:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.163 23:49:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:10.163 23:49:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.421 Malloc0 00:06:10.421 23:49:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.989 Malloc1 00:06:10.989 23:49:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.989 23:49:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.247 /dev/nbd0 00:06:11.247 23:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.248 23:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 
/proc/partitions 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.248 1+0 records in 00:06:11.248 1+0 records out 00:06:11.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173524 s, 23.6 MB/s 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:11.248 23:49:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:11.248 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.248 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.248 23:49:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.506 /dev/nbd1 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 
1 )) 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.507 1+0 records in 00:06:11.507 1+0 records out 00:06:11.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219583 s, 18.7 MB/s 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:11.507 23:49:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.507 23:49:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.765 { 00:06:11.765 "nbd_device": "/dev/nbd0", 00:06:11.765 "bdev_name": "Malloc0" 00:06:11.765 }, 00:06:11.765 { 00:06:11.765 "nbd_device": "/dev/nbd1", 00:06:11.765 "bdev_name": "Malloc1" 00:06:11.765 } 00:06:11.765 ]' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.765 { 00:06:11.765 "nbd_device": "/dev/nbd0", 00:06:11.765 "bdev_name": "Malloc0" 00:06:11.765 }, 00:06:11.765 { 00:06:11.765 "nbd_device": "/dev/nbd1", 00:06:11.765 "bdev_name": "Malloc1" 00:06:11.765 } 00:06:11.765 ]' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.765 /dev/nbd1' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.765 /dev/nbd1' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.765 256+0 records in 00:06:11.765 256+0 records out 00:06:11.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592193 s, 177 MB/s 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.765 23:49:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.023 256+0 records in 00:06:12.023 256+0 records out 00:06:12.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260101 s, 40.3 MB/s 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.023 256+0 records in 00:06:12.023 256+0 records out 00:06:12.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267808 s, 39.2 MB/s 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:12.023 23:49:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.024 23:49:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.282 23:49:46 
event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.282 23:49:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.540 23:49:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.798 23:49:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.798 23:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.798 23:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 
00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.056 23:49:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.056 23:49:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.315 23:49:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.315 [2024-07-15 23:49:47.775583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.573 [2024-07-15 23:49:47.864718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.573 [2024-07-15 23:49:47.864718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.573 [2024-07-15 23:49:47.913456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.573 [2024-07-15 23:49:47.913521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:16.856 23:49:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.856 23:49:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:16.856 spdk_app_start Round 1 00:06:16.856 23:49:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1163605 /var/tmp/spdk-nbd.sock 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1163605 ']' 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:16.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.856 23:49:50 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:16.856 23:49:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.856 Malloc0 00:06:16.856 23:49:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.114 Malloc1 00:06:17.114 23:49:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.114 23:49:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@11 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.115 23:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.373 /dev/nbd0 00:06:17.373 23:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.373 23:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.373 1+0 records in 00:06:17.373 1+0 records out 00:06:17.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187244 s, 21.9 MB/s 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.373 23:49:51 
event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:17.373 23:49:51 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:17.373 23:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.373 23:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.373 23:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.939 /dev/nbd1 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.939 1+0 records in 00:06:17.939 1+0 records out 00:06:17.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222932 s, 
18.4 MB/s 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:17.939 23:49:52 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.939 23:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.197 { 00:06:18.197 "nbd_device": "/dev/nbd0", 00:06:18.197 "bdev_name": "Malloc0" 00:06:18.197 }, 00:06:18.197 { 00:06:18.197 "nbd_device": "/dev/nbd1", 00:06:18.197 "bdev_name": "Malloc1" 00:06:18.197 } 00:06:18.197 ]' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.197 { 00:06:18.197 "nbd_device": "/dev/nbd0", 00:06:18.197 "bdev_name": "Malloc0" 00:06:18.197 }, 00:06:18.197 { 00:06:18.197 "nbd_device": "/dev/nbd1", 00:06:18.197 "bdev_name": "Malloc1" 00:06:18.197 } 00:06:18.197 ]' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.197 /dev/nbd1' 00:06:18.197 23:49:52 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.197 /dev/nbd1' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.197 256+0 records in 00:06:18.197 256+0 records out 00:06:18.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590696 s, 178 MB/s 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.197 256+0 records in 00:06:18.197 256+0 records out 00:06:18.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251862 s, 41.6 MB/s 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.197 23:49:52 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.197 256+0 records in 00:06:18.197 256+0 records out 00:06:18.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261986 s, 40.0 MB/s 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.197 23:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.198 23:49:52 event.app_repeat -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.198 23:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.764 23:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.026 23:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.285 23:49:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.285 23:49:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.542 23:49:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.799 [2024-07-15 23:49:54.094823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.799 [2024-07-15 23:49:54.184766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:19.799 [2024-07-15 23:49:54.184800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.799 [2024-07-15 23:49:54.236060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.799 [2024-07-15 23:49:54.236128] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.073 23:49:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.073 23:49:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:23.073 spdk_app_start Round 2 00:06:23.073 23:49:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1163605 /var/tmp/spdk-nbd.sock 00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1163605 ']' 00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.073 23:49:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.073 23:49:57 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:23.073 23:49:57 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:23.074 23:49:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.074 Malloc0 00:06:23.074 23:49:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.639 Malloc1 00:06:23.639 23:49:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.639 23:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.896 /dev/nbd0 00:06:23.896 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.896 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.896 23:49:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.896 1+0 records in 00:06:23.896 1+0 records out 00:06:23.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00015856 s, 25.8 MB/s 00:06:23.897 23:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.897 23:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.897 23:49:58 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.897 23:49:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.897 23:49:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.897 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.897 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.897 23:49:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.154 /dev/nbd1 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.154 1+0 records in 00:06:24.154 1+0 records out 00:06:24.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238426 s, 17.2 MB/s 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:24.154 23:49:58 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.154 23:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd0", 00:06:24.412 "bdev_name": "Malloc0" 00:06:24.412 }, 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd1", 00:06:24.412 "bdev_name": "Malloc1" 00:06:24.412 } 00:06:24.412 ]' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd0", 00:06:24.412 "bdev_name": "Malloc0" 00:06:24.412 }, 00:06:24.412 { 00:06:24.412 "nbd_device": "/dev/nbd1", 00:06:24.412 "bdev_name": "Malloc1" 00:06:24.412 } 00:06:24.412 ]' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.412 /dev/nbd1' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.412 /dev/nbd1' 00:06:24.412 
23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.412 256+0 records in 00:06:24.412 256+0 records out 00:06:24.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059476 s, 176 MB/s 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.412 256+0 records in 00:06:24.412 256+0 records out 00:06:24.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247103 s, 42.4 MB/s 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.412 23:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.672 256+0 records in 00:06:24.672 256+0 records out 00:06:24.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265712 s, 39.5 MB/s 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.672 23:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.975 23:49:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.233 23:49:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.233 23:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.491 23:49:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.491 23:49:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.749 23:50:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.008 [2024-07-15 23:50:00.387061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.008 [2024-07-15 23:50:00.476433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.008 [2024-07-15 23:50:00.476463] 
reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.266 [2024-07-15 23:50:00.527012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.266 [2024-07-15 23:50:00.527070] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.795 23:50:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1163605 /var/tmp/spdk-nbd.sock 00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1163605 ']' 00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.795 23:50:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:29.052 23:50:03 event.app_repeat -- event/event.sh@39 -- # killprocess 1163605 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1163605 ']' 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1163605 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.052 23:50:03 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1163605 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1163605' 00:06:29.310 killing process with pid 1163605 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1163605 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1163605 00:06:29.310 spdk_app_start is called in Round 0. 00:06:29.310 Shutdown signal received, stop current app iteration 00:06:29.310 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:29.310 spdk_app_start is called in Round 1. 00:06:29.310 Shutdown signal received, stop current app iteration 00:06:29.310 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:29.310 spdk_app_start is called in Round 2. 
00:06:29.310 Shutdown signal received, stop current app iteration 00:06:29.310 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:29.310 spdk_app_start is called in Round 3. 00:06:29.310 Shutdown signal received, stop current app iteration 00:06:29.310 23:50:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.310 23:50:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.310 00:06:29.310 real 0m19.468s 00:06:29.310 user 0m43.509s 00:06:29.310 sys 0m3.492s 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.310 23:50:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.310 ************************************ 00:06:29.310 END TEST app_repeat 00:06:29.310 ************************************ 00:06:29.310 23:50:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.310 23:50:03 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.310 23:50:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.310 23:50:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.310 23:50:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.310 ************************************ 00:06:29.310 START TEST cpu_locks 00:06:29.310 ************************************ 00:06:29.310 23:50:03 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.310 * Looking for test storage... 
00:06:29.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:29.569 23:50:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.569 23:50:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.569 23:50:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.569 23:50:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.569 23:50:03 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.569 23:50:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.569 23:50:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 ************************************ 00:06:29.569 START TEST default_locks 00:06:29.569 ************************************ 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1165632 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1165632 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1165632 ']' 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:29.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.569 23:50:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 [2024-07-15 23:50:03.906893] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:29.569 [2024-07-15 23:50:03.906983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165632 ] 00:06:29.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.569 [2024-07-15 23:50:03.965500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.569 [2024-07-15 23:50:04.052654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.827 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.828 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:29.828 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1165632 00:06:29.828 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1165632 00:06:29.828 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.394 lslocks: write error 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1165632 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1165632 ']' 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1165632 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1165632 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1165632' 00:06:30.394 killing process with pid 1165632 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1165632 00:06:30.394 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1165632 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1165632 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1165632 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1165632 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1165632 ']' 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1165632) - No such process 00:06:30.653 ERROR: process (pid: 1165632) is no longer running 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.653 00:06:30.653 real 0m1.085s 00:06:30.653 user 0m1.135s 00:06:30.653 sys 0m0.489s 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.653 23:50:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.653 ************************************ 00:06:30.653 END TEST default_locks 00:06:30.653 ************************************ 00:06:30.653 23:50:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:30.653 23:50:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.653 23:50:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.653 23:50:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.653 ************************************ 00:06:30.653 START TEST default_locks_via_rpc 00:06:30.653 ************************************ 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1165764 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1165764 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1165764 ']' 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.653 23:50:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.653 [2024-07-15 23:50:05.043599] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:30.653 [2024-07-15 23:50:05.043690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165764 ] 00:06:30.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.653 [2024-07-15 23:50:05.102209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.912 [2024-07-15 23:50:05.189374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.912 23:50:05 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1165764 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1165764 00:06:30.912 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1165764 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1165764 ']' 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1165764 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1165764 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1165764' 00:06:31.478 killing process with pid 1165764 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 
1165764 00:06:31.478 23:50:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1165764 00:06:31.736 00:06:31.736 real 0m1.082s 00:06:31.736 user 0m1.102s 00:06:31.736 sys 0m0.512s 00:06:31.736 23:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.736 23:50:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.736 ************************************ 00:06:31.736 END TEST default_locks_via_rpc 00:06:31.736 ************************************ 00:06:31.736 23:50:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.736 23:50:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.736 23:50:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.736 23:50:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.736 ************************************ 00:06:31.736 START TEST non_locking_app_on_locked_coremask 00:06:31.736 ************************************ 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1165892 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1165892 /var/tmp/spdk.sock 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1165892 ']' 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.736 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.736 [2024-07-15 23:50:06.181927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:31.736 [2024-07-15 23:50:06.182016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165892 ] 00:06:31.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.736 [2024-07-15 23:50:06.240395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.995 [2024-07-15 23:50:06.328274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1165924 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1165924 /var/tmp/spdk2.sock 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1165924 ']' 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.253 23:50:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.253 [2024-07-15 23:50:06.610719] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:32.253 [2024-07-15 23:50:06.610817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165924 ] 00:06:32.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.253 [2024-07-15 23:50:06.701081] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.253 [2024-07-15 23:50:06.701130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.511 [2024-07-15 23:50:06.882184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.445 23:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.445 23:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:33.445 23:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1165892 00:06:33.445 23:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1165892 00:06:33.445 23:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.010 lslocks: write error 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1165892 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1165892 ']' 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1165892 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1165892 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1165892' 00:06:34.010 killing process with pid 1165892 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1165892 00:06:34.010 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1165892 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1165924 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1165924 ']' 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1165924 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1165924 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1165924' 00:06:34.577 killing process with pid 1165924 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1165924 00:06:34.577 23:50:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1165924 00:06:34.836 00:06:34.836 real 0m3.020s 00:06:34.836 user 0m3.413s 00:06:34.836 sys 0m1.026s 00:06:34.836 23:50:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.836 23:50:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.836 ************************************ 00:06:34.836 END TEST non_locking_app_on_locked_coremask 00:06:34.836 ************************************ 00:06:34.836 23:50:09 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:34.836 23:50:09 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.836 23:50:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.836 23:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.836 ************************************ 00:06:34.836 START TEST locking_app_on_unlocked_coremask 00:06:34.836 ************************************ 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1166229 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1166229 /var/tmp/spdk.sock 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166229 ']' 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.836 23:50:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.836 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.836 [2024-07-15 23:50:09.262908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:34.836 [2024-07-15 23:50:09.263008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166229 ] 00:06:34.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.836 [2024-07-15 23:50:09.322993] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:34.836 [2024-07-15 23:50:09.323043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.094 [2024-07-15 23:50:09.413686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1166239 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1166239 /var/tmp/spdk2.sock 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166239 ']' 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.353 23:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.353 [2024-07-15 23:50:09.690150] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:35.353 [2024-07-15 23:50:09.690251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166239 ] 00:06:35.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.353 [2024-07-15 23:50:09.778868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.610 [2024-07-15 23:50:09.958110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.543 23:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:36.543 23:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:36.543 23:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1166239 00:06:36.543 23:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1166239 00:06:36.543 23:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.109 lslocks: write error 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1166229 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1166229 ']' 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1166229 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166229 00:06:37.109 23:50:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1166229' 00:06:37.109 killing process with pid 1166229 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1166229 00:06:37.109 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1166229 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1166239 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1166239 ']' 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1166239 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166239 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1166239' 00:06:37.674 killing process with pid 1166239 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 1166239 00:06:37.674 23:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1166239 00:06:37.932 00:06:37.932 real 0m2.996s 00:06:37.932 user 0m3.336s 00:06:37.932 sys 0m1.055s 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 ************************************ 00:06:37.932 END TEST locking_app_on_unlocked_coremask 00:06:37.932 ************************************ 00:06:37.932 23:50:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.932 23:50:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:37.932 23:50:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.932 23:50:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 ************************************ 00:06:37.932 START TEST locking_app_on_locked_coremask 00:06:37.932 ************************************ 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1166563 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1166563 /var/tmp/spdk.sock 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166563 ']' 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.932 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 [2024-07-15 23:50:12.314451] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:37.932 [2024-07-15 23:50:12.314548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166563 ] 00:06:37.932 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.932 [2024-07-15 23:50:12.374111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.190 [2024-07-15 23:50:12.464882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1166574 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1166574 /var/tmp/spdk2.sock 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1166574 /var/tmp/spdk2.sock 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1166574 /var/tmp/spdk2.sock 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166574 ']' 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.190 23:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.448 [2024-07-15 23:50:12.737088] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:38.448 [2024-07-15 23:50:12.737184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166574 ] 00:06:38.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.448 [2024-07-15 23:50:12.823974] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1166563 has claimed it. 00:06:38.448 [2024-07-15 23:50:12.824022] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1166574) - No such process 00:06:39.011 ERROR: process (pid: 1166574) is no longer running 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # 
locks_exist 1166563 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1166563 00:06:39.011 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.575 lslocks: write error 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1166563 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1166563 ']' 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1166563 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166563 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1166563' 00:06:39.576 killing process with pid 1166563 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1166563 00:06:39.576 23:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1166563 00:06:39.832 00:06:39.832 real 0m1.922s 00:06:39.832 user 0m2.188s 00:06:39.832 sys 0m0.640s 00:06:39.832 23:50:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.832 23:50:14 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.832 ************************************ 00:06:39.832 END TEST locking_app_on_locked_coremask 00:06:39.832 ************************************ 00:06:39.832 23:50:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.832 23:50:14 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.832 23:50:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.832 23:50:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.832 ************************************ 00:06:39.832 START TEST locking_overlapped_coremask 00:06:39.832 ************************************ 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1166726 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1166726 /var/tmp/spdk.sock 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166726 ']' 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.832 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.832 [2024-07-15 23:50:14.290011] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:39.832 [2024-07-15 23:50:14.290098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166726 ] 00:06:39.832 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.090 [2024-07-15 23:50:14.348847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.090 [2024-07-15 23:50:14.437790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.090 [2024-07-15 23:50:14.437843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.090 [2024-07-15 23:50:14.437846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1166808 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1166808 /var/tmp/spdk2.sock 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1166808 /var/tmp/spdk2.sock 00:06:40.348 23:50:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1166808 /var/tmp/spdk2.sock 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1166808 ']' 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.348 23:50:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.348 [2024-07-15 23:50:14.713988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:40.348 [2024-07-15 23:50:14.714094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166808 ] 00:06:40.348 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.348 [2024-07-15 23:50:14.804610] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1166726 has claimed it. 00:06:40.348 [2024-07-15 23:50:14.804663] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1166808) - No such process 00:06:41.281 ERROR: process (pid: 1166808) is no longer running 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- 
event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1166726 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1166726 ']' 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1166726 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166726 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1166726' 00:06:41.281 killing process with pid 1166726 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 1166726 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1166726 00:06:41.281 00:06:41.281 real 0m1.516s 00:06:41.281 user 0m4.219s 00:06:41.281 sys 0m0.432s 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.281 23:50:15 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.281 ************************************ 00:06:41.281 END TEST locking_overlapped_coremask 00:06:41.281 ************************************ 00:06:41.281 23:50:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.281 23:50:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.281 23:50:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.281 23:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.549 ************************************ 00:06:41.549 START TEST locking_overlapped_coremask_via_rpc 00:06:41.549 ************************************ 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1166938 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1166938 /var/tmp/spdk.sock 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1166938 ']' 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:41.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.549 23:50:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.549 [2024-07-15 23:50:15.864208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:41.549 [2024-07-15 23:50:15.864309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166938 ] 00:06:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.549 [2024-07-15 23:50:15.928103] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.549 [2024-07-15 23:50:15.928159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.549 [2024-07-15 23:50:16.020609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.549 [2024-07-15 23:50:16.020658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.549 [2024-07-15 23:50:16.020693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1166948 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1166948 /var/tmp/spdk2.sock 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1166948 ']' 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.862 23:50:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.862 [2024-07-15 23:50:16.299336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:41.862 [2024-07-15 23:50:16.299438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166948 ] 00:06:41.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.119 [2024-07-15 23:50:16.388348] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.119 [2024-07-15 23:50:16.388389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.119 [2024-07-15 23:50:16.566184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.119 [2024-07-15 23:50:16.569191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.119 [2024-07-15 23:50:16.569193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.052 23:50:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.052 [2024-07-15 23:50:17.350246] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1166938 has claimed it. 00:06:43.052 request: 00:06:43.052 { 00:06:43.052 "method": "framework_enable_cpumask_locks", 00:06:43.052 "req_id": 1 00:06:43.052 } 00:06:43.052 Got JSON-RPC error response 00:06:43.052 response: 00:06:43.052 { 00:06:43.052 "code": -32603, 00:06:43.052 "message": "Failed to claim CPU core: 2" 00:06:43.052 } 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1166938 /var/tmp/spdk.sock 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 
-- # '[' -z 1166938 ']' 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.052 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1166948 /var/tmp/spdk2.sock 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1166948 ']' 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.310 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.567 00:06:43.567 real 0m2.154s 00:06:43.567 user 0m1.240s 00:06:43.567 sys 0m0.206s 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.567 23:50:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.567 ************************************ 00:06:43.567 END TEST locking_overlapped_coremask_via_rpc 00:06:43.567 ************************************ 00:06:43.567 23:50:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:43.567 23:50:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1166938 ]] 00:06:43.567 23:50:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1166938 00:06:43.567 23:50:17 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1166938 ']' 00:06:43.567 23:50:17 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1166938 00:06:43.567 23:50:17 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:43.567 23:50:17 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.567 23:50:17 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166938 00:06:43.567 23:50:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.567 23:50:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.567 23:50:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1166938' 00:06:43.568 killing process with pid 1166938 00:06:43.568 23:50:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1166938 00:06:43.568 23:50:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1166938 00:06:43.826 23:50:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1166948 ]] 00:06:43.826 23:50:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1166948 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1166948 ']' 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1166948 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1166948 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
1166948' 00:06:43.826 killing process with pid 1166948 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1166948 00:06:43.826 23:50:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1166948 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1166938 ]] 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1166938 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1166938 ']' 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1166938 00:06:44.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1166938) - No such process 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1166938 is not found' 00:06:44.084 Process with pid 1166938 is not found 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1166948 ]] 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1166948 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1166948 ']' 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1166948 00:06:44.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1166948) - No such process 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1166948 is not found' 00:06:44.084 Process with pid 1166948 is not found 00:06:44.084 23:50:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.084 00:06:44.084 real 0m14.817s 00:06:44.084 user 0m27.424s 00:06:44.084 sys 0m5.202s 00:06:44.084 23:50:18 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.084 
23:50:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.084 ************************************ 00:06:44.084 END TEST cpu_locks 00:06:44.084 ************************************ 00:06:44.342 00:06:44.342 real 0m42.144s 00:06:44.342 user 1m23.383s 00:06:44.342 sys 0m9.529s 00:06:44.342 23:50:18 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.342 23:50:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.342 ************************************ 00:06:44.342 END TEST event 00:06:44.342 ************************************ 00:06:44.342 23:50:18 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:44.342 23:50:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.342 23:50:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.342 23:50:18 -- common/autotest_common.sh@10 -- # set +x 00:06:44.342 ************************************ 00:06:44.342 START TEST thread 00:06:44.342 ************************************ 00:06:44.342 23:50:18 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:44.342 * Looking for test storage... 
00:06:44.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:44.342 23:50:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.342 23:50:18 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:44.342 23:50:18 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.342 23:50:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.342 ************************************ 00:06:44.342 START TEST thread_poller_perf 00:06:44.342 ************************************ 00:06:44.342 23:50:18 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.342 [2024-07-15 23:50:18.757318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:44.342 [2024-07-15 23:50:18.757384] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167343 ] 00:06:44.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.342 [2024-07-15 23:50:18.815820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.600 [2024-07-15 23:50:18.906306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.600 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:45.535 ====================================== 00:06:45.535 busy:2717188772 (cyc) 00:06:45.535 total_run_count: 262000 00:06:45.535 tsc_hz: 2700000000 (cyc) 00:06:45.535 ====================================== 00:06:45.535 poller_cost: 10370 (cyc), 3840 (nsec) 00:06:45.535 00:06:45.535 real 0m1.238s 00:06:45.535 user 0m1.160s 00:06:45.535 sys 0m0.071s 00:06:45.535 23:50:19 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.535 23:50:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.535 ************************************ 00:06:45.535 END TEST thread_poller_perf 00:06:45.535 ************************************ 00:06:45.535 23:50:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.535 23:50:20 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:45.535 23:50:20 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.535 23:50:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.535 ************************************ 00:06:45.535 START TEST thread_poller_perf 00:06:45.535 ************************************ 00:06:45.535 23:50:20 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.535 [2024-07-15 23:50:20.049246] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:45.535 [2024-07-15 23:50:20.049315] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167465 ] 00:06:45.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.794 [2024-07-15 23:50:20.108948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.794 [2024-07-15 23:50:20.199527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.794 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:47.167 ====================================== 00:06:47.168 busy:2703192104 (cyc) 00:06:47.168 total_run_count: 3641000 00:06:47.168 tsc_hz: 2700000000 (cyc) 00:06:47.168 ====================================== 00:06:47.168 poller_cost: 742 (cyc), 274 (nsec) 00:06:47.168 00:06:47.168 real 0m1.229s 00:06:47.168 user 0m1.143s 00:06:47.168 sys 0m0.078s 00:06:47.168 23:50:21 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.168 23:50:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.168 ************************************ 00:06:47.168 END TEST thread_poller_perf 00:06:47.168 ************************************ 00:06:47.168 23:50:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.168 00:06:47.168 real 0m2.630s 00:06:47.168 user 0m2.382s 00:06:47.168 sys 0m0.244s 00:06:47.168 23:50:21 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.168 23:50:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.168 ************************************ 00:06:47.168 END TEST thread 00:06:47.168 ************************************ 00:06:47.168 23:50:21 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:47.168 23:50:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:47.168 
23:50:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.168 23:50:21 -- common/autotest_common.sh@10 -- # set +x 00:06:47.168 ************************************ 00:06:47.168 START TEST accel 00:06:47.168 ************************************ 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:47.168 * Looking for test storage... 00:06:47.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:47.168 23:50:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:47.168 23:50:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:47.168 23:50:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.168 23:50:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1167624 00:06:47.168 23:50:21 accel -- accel/accel.sh@63 -- # waitforlisten 1167624 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@827 -- # '[' -z 1167624 ']' 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.168 23:50:21 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:47.168 23:50:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.168 23:50:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.168 23:50:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.168 23:50:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.168 23:50:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.168 23:50:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.168 23:50:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.168 23:50:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.168 23:50:21 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.168 [2024-07-15 23:50:21.448889] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.168 [2024-07-15 23:50:21.448978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167624 ] 00:06:47.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.168 [2024-07-15 23:50:21.507755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.168 [2024-07-15 23:50:21.594942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.426 23:50:21 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.426 23:50:21 accel -- common/autotest_common.sh@860 -- # return 0 00:06:47.426 23:50:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:47.426 23:50:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:47.426 23:50:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:47.426 23:50:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:47.426 23:50:21 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py 
accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:47.426 23:50:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:47.426 23:50:21 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:47.426 23:50:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.426 23:50:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.426 23:50:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- 
accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.426 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.426 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.426 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- 
accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # IFS== 00:06:47.427 23:50:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:47.427 23:50:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:47.427 23:50:21 accel -- accel/accel.sh@75 -- # killprocess 1167624 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@946 -- # '[' -z 1167624 ']' 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@950 -- # kill -0 1167624 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@951 -- # uname 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1167624 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1167624' 00:06:47.427 killing process with pid 1167624 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@965 -- # 
kill 1167624 00:06:47.427 23:50:21 accel -- common/autotest_common.sh@970 -- # wait 1167624 00:06:47.685 23:50:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:47.685 23:50:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:47.685 23:50:22 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:47.685 23:50:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.685 23:50:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.685 23:50:22 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:47.685 23:50:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:47.685 23:50:22 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:47.685 23:50:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:47.943 23:50:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:47.943 23:50:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:47.943 23:50:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:47.943 23:50:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.943 ************************************ 00:06:47.943 START TEST accel_missing_filename 00:06:47.943 ************************************ 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.943 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.943 23:50:22 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:47.943 23:50:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:47.943 [2024-07-15 23:50:22.263920] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:47.943 [2024-07-15 23:50:22.263989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167760 ] 00:06:47.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.943 [2024-07-15 23:50:22.322243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.943 [2024-07-15 23:50:22.412793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.202 [2024-07-15 23:50:22.464302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.202 [2024-07-15 23:50:22.513403] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.202 A filename is required. 
00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.202 00:06:48.202 real 0m0.331s 00:06:48.202 user 0m0.239s 00:06:48.202 sys 0m0.128s 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.202 23:50:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:48.202 ************************************ 00:06:48.202 END TEST accel_missing_filename 00:06:48.202 ************************************ 00:06:48.202 23:50:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.202 23:50:22 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:48.202 23:50:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.202 23:50:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.202 ************************************ 00:06:48.202 START TEST accel_compress_verify 00:06:48.202 ************************************ 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.202 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.202 23:50:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.202 [2024-07-15 23:50:22.642674] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:48.202 [2024-07-15 23:50:22.642744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167789 ] 00:06:48.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.202 [2024-07-15 23:50:22.701067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.462 [2024-07-15 23:50:22.791990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.462 [2024-07-15 23:50:22.843257] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.462 [2024-07-15 23:50:22.892394] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.462 00:06:48.462 Compression does not support the verify option, aborting. 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.462 00:06:48.462 real 0m0.330s 00:06:48.462 user 0m0.235s 00:06:48.462 sys 0m0.131s 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.462 23:50:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.462 ************************************ 00:06:48.462 END TEST accel_compress_verify 00:06:48.462 ************************************ 00:06:48.721 23:50:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.721 
23:50:22 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:48.721 23:50:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.721 23:50:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.721 ************************************ 00:06:48.721 START TEST accel_wrong_workload 00:06:48.721 ************************************ 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.721 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:48.721 23:50:23 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:48.721 Unsupported workload type: foobar 00:06:48.721 [2024-07-15 23:50:23.022968] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.721 accel_perf options: 00:06:48.721 [-h help message] 00:06:48.721 [-q queue depth per core] 00:06:48.721 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.721 [-T number of threads per core 00:06:48.722 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.722 [-t time in seconds] 00:06:48.722 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.722 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.722 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.722 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.722 [-S for crc32c workload, use this seed value (default 0) 00:06:48.722 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.722 [-f for fill workload, use this BYTE value (default 255) 00:06:48.722 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.722 [-y verify result if this switch is on] 00:06:48.722 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.722 Can be used to spread operations across a wider range of memory. 
00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.722 00:06:48.722 real 0m0.022s 00:06:48.722 user 0m0.013s 00:06:48.722 sys 0m0.009s 00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.722 23:50:23 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:48.722 ************************************ 00:06:48.722 END TEST accel_wrong_workload 00:06:48.722 ************************************ 00:06:48.722 23:50:23 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.722 Error: writing output failed: Broken pipe 00:06:48.722 ************************************ 00:06:48.722 START TEST accel_negative_buffers 00:06:48.722 ************************************ 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.722 23:50:23 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:48.722 23:50:23 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:48.722 -x option must be non-negative. 00:06:48.722 [2024-07-15 23:50:23.090257] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.722 accel_perf options: 00:06:48.722 [-h help message] 00:06:48.722 [-q queue depth per core] 00:06:48.722 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.722 [-T number of threads per core 00:06:48.722 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:48.722 [-t time in seconds] 00:06:48.722 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.722 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.722 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.722 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.722 [-S for crc32c workload, use this seed value (default 0) 00:06:48.722 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.722 [-f for fill workload, use this BYTE value (default 255) 00:06:48.722 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.722 [-y verify result if this switch is on] 00:06:48.722 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.722 Can be used to spread operations across a wider range of memory. 
00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.722 00:06:48.722 real 0m0.023s 00:06:48.722 user 0m0.012s 00:06:48.722 sys 0m0.011s 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.722 23:50:23 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:48.722 ************************************ 00:06:48.722 END TEST accel_negative_buffers 00:06:48.722 ************************************ 00:06:48.722 Error: writing output failed: Broken pipe 00:06:48.722 23:50:23 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.722 23:50:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.722 ************************************ 00:06:48.722 START TEST accel_crc32c 00:06:48.722 ************************************ 00:06:48.722 23:50:23 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.722 23:50:23 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.722 [2024-07-15 23:50:23.155012] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:48.722 [2024-07-15 23:50:23.155078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167936 ] 00:06:48.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.722 [2024-07-15 23:50:23.212930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.979 [2024-07-15 23:50:23.302642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 23:50:23 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.980 23:50:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:50.353 23:50:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.353 00:06:50.353 real 0m1.329s 00:06:50.353 user 0m1.217s 00:06:50.353 sys 0m0.115s 00:06:50.353 23:50:24 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.353 23:50:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:50.353 ************************************ 00:06:50.353 END TEST accel_crc32c 00:06:50.353 ************************************ 00:06:50.353 23:50:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.353 23:50:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:50.353 23:50:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.353 23:50:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.353 ************************************ 00:06:50.353 START TEST accel_crc32c_C2 00:06:50.353 ************************************ 00:06:50.353 
23:50:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:50.353 [2024-07-15 23:50:24.537487] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:50.353 [2024-07-15 23:50:24.537557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168066 ] 00:06:50.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.353 [2024-07-15 23:50:24.596347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.353 [2024-07-15 23:50:24.686442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 
23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.353 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:50.354 23:50:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.726 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:25 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.727 00:06:51.727 real 0m1.337s 00:06:51.727 user 0m1.211s 00:06:51.727 sys 0m0.127s 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.727 23:50:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:51.727 ************************************ 00:06:51.727 END TEST accel_crc32c_C2 00:06:51.727 ************************************ 00:06:51.727 23:50:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.727 23:50:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:51.727 23:50:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.727 23:50:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.727 ************************************ 00:06:51.727 START TEST accel_copy 00:06:51.727 ************************************ 00:06:51.727 23:50:25 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:51.727 23:50:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:51.727 [2024-07-15 23:50:25.923689] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:51.727 [2024-07-15 23:50:25.923754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168187 ] 00:06:51.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.727 [2024-07-15 23:50:25.981746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.727 [2024-07-15 23:50:26.072488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # 
val=software 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:51.727 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # 
case "$var" in 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.728 23:50:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 
23:50:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:53.101 23:50:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.101 00:06:53.101 real 0m1.334s 00:06:53.101 user 0m1.205s 00:06:53.101 sys 0m0.129s 00:06:53.101 23:50:27 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.101 23:50:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:53.101 ************************************ 00:06:53.101 END TEST accel_copy 00:06:53.101 ************************************ 00:06:53.101 23:50:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.101 23:50:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:53.101 23:50:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:53.101 23:50:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.101 ************************************ 00:06:53.101 START TEST accel_fill 00:06:53.101 ************************************ 00:06:53.101 23:50:27 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:53.101 [2024-07-15 23:50:27.306774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:53.101 [2024-07-15 23:50:27.306841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168306 ] 00:06:53.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.101 [2024-07-15 23:50:27.364511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.101 [2024-07-15 23:50:27.455214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:53.101 23:50:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:54.474 23:50:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.474 00:06:54.474 real 0m1.327s 00:06:54.474 user 0m1.203s 00:06:54.474 sys 0m0.126s 00:06:54.474 23:50:28 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.474 23:50:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:54.474 ************************************ 00:06:54.474 END TEST accel_fill 00:06:54.474 ************************************ 00:06:54.474 23:50:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:54.474 23:50:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:54.474 23:50:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.474 23:50:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.474 ************************************ 00:06:54.474 START TEST accel_copy_crc32c 00:06:54.474 ************************************ 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 
00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:54.474 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:54.474 [2024-07-15 23:50:28.681107] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:54.475 [2024-07-15 23:50:28.681190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168521 ]
00:06:54.475 EAL: No free 2048 kB hugepages reported on node 1
00:06:54.475 [2024-07-15 23:50:28.739073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.475 [2024-07-15 23:50:28.829873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:54.475 23:50:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:55.850
00:06:55.850 real 0m1.334s
00:06:55.850 user 0m1.201s
00:06:55.850 sys 0m0.136s
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:55.850 23:50:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:55.850 ************************************
00:06:55.850 END TEST accel_copy_crc32c
00:06:55.850 ************************************
00:06:55.850 23:50:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:55.850 23:50:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:06:55.850 23:50:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:55.850 23:50:30 accel -- common/autotest_common.sh@10 -- # set +x
00:06:55.850 ************************************
00:06:55.850 START TEST accel_copy_crc32c_C2
00:06:55.850 ************************************
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:55.850 [2024-07-15 23:50:30.068764] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:55.850 [2024-07-15 23:50:30.068832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168640 ]
00:06:55.850 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.850 [2024-07-15 23:50:30.127959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.850 [2024-07-15 23:50:30.217243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:55.850 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:55.851 23:50:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.222 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:57.223
00:06:57.223 real 0m1.330s
00:06:57.223 user 0m1.213s
00:06:57.223 sys 0m0.120s
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:57.223 23:50:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:57.223 ************************************
00:06:57.223 END TEST accel_copy_crc32c_C2
00:06:57.223 ************************************
00:06:57.223 23:50:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:57.223 23:50:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:57.223 23:50:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:57.223 23:50:31 accel -- common/autotest_common.sh@10 -- # set +x
00:06:57.223 ************************************
00:06:57.223 START TEST accel_dualcast
00:06:57.223 ************************************
00:06:57.223 23:50:31 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:06:57.223 [2024-07-15 23:50:31.453262] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:57.223 [2024-07-15 23:50:31.453332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168763 ]
00:06:57.223 EAL: No free 2048 kB hugepages reported on node 1
00:06:57.223 [2024-07-15 23:50:31.512585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.223 [2024-07-15 23:50:31.603879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:57.223 23:50:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:58.599 23:50:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:58.599
00:06:58.599 real 0m1.339s
00:06:58.599 user 0m1.215s
00:06:58.599 sys 0m0.125s
00:06:58.599 23:50:32 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:58.599 23:50:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:58.599 ************************************
00:06:58.599 END TEST accel_dualcast
00:06:58.599 ************************************
00:06:58.599 23:50:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:58.599 23:50:32 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:06:58.599 23:50:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:58.599 23:50:32 accel -- common/autotest_common.sh@10 -- # set +x
00:06:58.599 ************************************
00:06:58.599 START TEST accel_compare
00:06:58.599 ************************************
00:06:58.599 23:50:32 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:06:58.599 23:50:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:06:58.599 [2024-07-15 23:50:32.839294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:06:58.599 [2024-07-15 23:50:32.839366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168891 ]
00:06:58.599 EAL: No free 2048 kB hugepages reported on node 1
00:06:58.599 [2024-07-15 23:50:32.897927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.599 [2024-07-15 23:50:32.987598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.599 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:06:58.600 23:50:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@21
-- # case "$var" in 00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:00.009 23:50:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.009 00:07:00.009 real 0m1.328s 00:07:00.009 user 0m1.206s 00:07:00.009 sys 0m0.123s 00:07:00.009 23:50:34 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.009 23:50:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 ************************************ 00:07:00.009 END TEST accel_compare 00:07:00.009 ************************************ 00:07:00.009 23:50:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:00.009 23:50:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.009 23:50:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.009 23:50:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.009 ************************************ 00:07:00.009 START TEST accel_xor 00:07:00.009 ************************************ 00:07:00.009 23:50:34 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:00.009 [2024-07-15 23:50:34.215642] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:00.009 [2024-07-15 23:50:34.215711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169098 ] 00:07:00.009 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.009 [2024-07-15 23:50:34.273663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.009 [2024-07-15 23:50:34.364320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:00.009 23:50:34 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:00.009 23:50:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.385 00:07:01.385 real 0m1.339s 00:07:01.385 user 0m1.221s 00:07:01.385 sys 0m0.121s 00:07:01.385 23:50:35 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.385 23:50:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:01.385 ************************************ 00:07:01.385 END TEST accel_xor 00:07:01.385 ************************************ 00:07:01.385 23:50:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:01.385 23:50:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:01.385 23:50:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.385 23:50:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.385 ************************************ 00:07:01.385 START TEST accel_xor 00:07:01.385 ************************************ 00:07:01.385 23:50:35 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:01.385 23:50:35 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.385 23:50:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:01.386 [2024-07-15 23:50:35.611213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.386 [2024-07-15 23:50:35.611291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169227 ] 00:07:01.386 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.386 [2024-07-15 23:50:35.671089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.386 [2024-07-15 23:50:35.761655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:01.386 23:50:35 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.386 23:50:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:02.760 23:50:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.760 00:07:02.760 real 0m1.340s 00:07:02.760 user 0m1.221s 00:07:02.760 sys 0m0.122s 00:07:02.760 23:50:36 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:02.760 23:50:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 ************************************ 00:07:02.760 END TEST accel_xor 00:07:02.760 ************************************ 00:07:02.760 23:50:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:02.760 23:50:36 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:02.760 23:50:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.760 23:50:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.760 ************************************ 00:07:02.760 START TEST accel_dif_verify 00:07:02.760 ************************************ 00:07:02.760 23:50:36 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:02.760 23:50:36 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:02.760 23:50:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:02.760 [2024-07-15 23:50:37.004485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:02.760 [2024-07-15 23:50:37.004550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169346 ] 00:07:02.760 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.760 [2024-07-15 23:50:37.062806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.760 [2024-07-15 23:50:37.153423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.760 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:02.761 23:50:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:04.135 23:50:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.135 00:07:04.135 real 0m1.340s 00:07:04.135 user 0m1.221s 00:07:04.135 sys 0m0.123s 00:07:04.135 23:50:38 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.135 23:50:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:04.135 ************************************ 00:07:04.135 END TEST accel_dif_verify 00:07:04.135 ************************************ 00:07:04.135 23:50:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:04.135 23:50:38 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:04.135 23:50:38 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:07:04.135 23:50:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.135 ************************************ 00:07:04.135 START TEST accel_dif_generate 00:07:04.135 ************************************ 00:07:04.135 23:50:38 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:04.135 [2024-07-15 23:50:38.395103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:04.135 [2024-07-15 23:50:38.395198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169469 ] 00:07:04.135 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.135 [2024-07-15 23:50:38.453701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.135 [2024-07-15 23:50:38.544167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.135 23:50:38 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.135 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:04.136 23:50:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.507 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:05.508 23:50:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.508 00:07:05.508 real 0m1.333s 00:07:05.508 user 0m1.209s 00:07:05.508 sys 0m0.128s 00:07:05.508 23:50:39 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.508 23:50:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:05.508 ************************************ 00:07:05.508 END TEST accel_dif_generate 00:07:05.508 ************************************ 00:07:05.508 23:50:39 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:05.508 23:50:39 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:05.508 23:50:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.508 23:50:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.508 ************************************ 00:07:05.508 START TEST accel_dif_generate_copy 00:07:05.508 ************************************ 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:07:05.508 [2024-07-15 23:50:39.774982] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:05.508 [2024-07-15 23:50:39.775051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169684 ] 00:07:05.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.508 [2024-07-15 23:50:39.833920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.508 [2024-07-15 23:50:39.924551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:05.508 23:50:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.882 00:07:06.882 real 0m1.339s 00:07:06.882 user 0m1.210s 00:07:06.882 sys 0m0.131s 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.882 23:50:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.882 ************************************ 00:07:06.882 END TEST accel_dif_generate_copy 00:07:06.882 ************************************ 00:07:06.882 23:50:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:06.882 23:50:41 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.882 23:50:41 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:06.882 23:50:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.882 23:50:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.882 ************************************ 00:07:06.882 START TEST accel_comp 00:07:06.882 ************************************ 00:07:06.882 23:50:41 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.882 23:50:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:06.883 23:50:41 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:06.883 [2024-07-15 23:50:41.162431] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:06.883 [2024-07-15 23:50:41.162498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169806 ] 00:07:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.883 [2024-07-15 23:50:41.220511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.883 [2024-07-15 23:50:41.311135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:06.883 23:50:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.255 23:50:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.255 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.255 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.255 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.255 23:50:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:08.256 23:50:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.256 00:07:08.256 real 0m1.338s 00:07:08.256 user 0m1.210s 00:07:08.256 sys 0m0.131s 00:07:08.256 23:50:42 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.256 23:50:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:08.256 ************************************ 00:07:08.256 END TEST accel_comp 00:07:08.256 ************************************ 00:07:08.256 23:50:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.256 23:50:42 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:08.256 23:50:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.256 23:50:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.256 ************************************ 00:07:08.256 START TEST accel_decomp 00:07:08.256 ************************************ 00:07:08.256 23:50:42 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:08.256 [2024-07-15 23:50:42.549996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:08.256 [2024-07-15 23:50:42.550060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169925 ] 00:07:08.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.256 [2024-07-15 23:50:42.608348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.256 [2024-07-15 23:50:42.699252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.256 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:08.257 23:50:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.630 23:50:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.630 00:07:09.630 real 0m1.339s 00:07:09.630 user 0m1.213s 00:07:09.630 sys 0m0.128s 00:07:09.630 23:50:43 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.630 23:50:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:09.630 ************************************ 00:07:09.630 END TEST accel_decomp 00:07:09.630 ************************************ 00:07:09.630 23:50:43 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.630 23:50:43 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:09.630 23:50:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.630 23:50:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.630 ************************************ 00:07:09.630 START TEST accel_decmop_full 00:07:09.630 ************************************ 00:07:09.630 23:50:43 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:09.630 23:50:43 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:09.630 [2024-07-15 23:50:43.938337] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:09.630 [2024-07-15 23:50:43.938402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170052 ] 00:07:09.630 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.630 [2024-07-15 23:50:43.996112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.630 [2024-07-15 23:50:44.086708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:09.630 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.631 
23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.631 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:09.888 23:50:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.821 23:50:45 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.821 00:07:10.821 real 0m1.352s 00:07:10.821 user 0m1.240s 00:07:10.821 sys 0m0.115s 00:07:10.821 23:50:45 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.821 23:50:45 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:10.821 ************************************ 00:07:10.821 END TEST accel_decmop_full 00:07:10.821 ************************************ 00:07:10.821 23:50:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.821 23:50:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:10.821 23:50:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.821 23:50:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.821 ************************************ 
00:07:10.821 START TEST accel_decomp_mcore 00:07:10.821 ************************************ 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:10.821 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:10.821 [2024-07-15 23:50:45.334745] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:10.821 [2024-07-15 23:50:45.334812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170258 ] 00:07:11.079 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.079 [2024-07-15 23:50:45.394277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.079 [2024-07-15 23:50:45.486872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.079 [2024-07-15 23:50:45.486974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.079 [2024-07-15 23:50:45.486977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.079 [2024-07-15 23:50:45.486923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:11.079 23:50:45 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.079 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:11.080 23:50:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.452 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.453 00:07:12.453 real 0m1.348s 00:07:12.453 user 0m4.529s 00:07:12.453 sys 0m0.143s 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.453 23:50:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:12.453 ************************************ 00:07:12.453 END TEST accel_decomp_mcore 00:07:12.453 ************************************ 00:07:12.453 23:50:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.453 23:50:46 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:12.453 23:50:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.453 23:50:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.453 ************************************ 00:07:12.453 START TEST accel_decomp_full_mcore 00:07:12.453 ************************************ 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:12.453 [2024-07-15 23:50:46.735080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:12.453 [2024-07-15 23:50:46.735156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170394 ] 00:07:12.453 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.453 [2024-07-15 23:50:46.794055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.453 [2024-07-15 23:50:46.887774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.453 [2024-07-15 23:50:46.887827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.453 [2024-07-15 23:50:46.887878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.453 [2024-07-15 23:50:46.887881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=0xf 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- 
# case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.453 23:50:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.453 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.454 23:50:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.827 23:50:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.827 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.828 00:07:13.828 real 0m1.367s 00:07:13.828 user 0m4.595s 00:07:13.828 sys 0m0.143s 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.828 23:50:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:13.828 ************************************ 00:07:13.828 END TEST accel_decomp_full_mcore 00:07:13.828 ************************************ 00:07:13.828 23:50:48 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.828 23:50:48 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:13.828 23:50:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.828 23:50:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.828 
************************************ 00:07:13.828 START TEST accel_decomp_mthread 00:07:13.828 ************************************ 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:13.828 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:13.828 [2024-07-15 23:50:48.153544] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:13.828 [2024-07-15 23:50:48.153612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170517 ] 00:07:13.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.828 [2024-07-15 23:50:48.211522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.828 [2024-07-15 23:50:48.302276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.086 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.086 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.086 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.086 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # 
val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- 
accel/accel.sh@22 -- # accel_module=software 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:14.087 23:50:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.021 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.021 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.021 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.021 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.021 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.022 00:07:15.022 real 0m1.348s 00:07:15.022 user 0m1.222s 00:07:15.022 sys 0m0.127s 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.022 23:50:49 accel.accel_decomp_mthread -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.022 ************************************ 00:07:15.022 END TEST accel_decomp_mthread 00:07:15.022 ************************************ 00:07:15.022 23:50:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.022 23:50:49 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:15.022 23:50:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.022 23:50:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.281 ************************************ 00:07:15.281 START TEST accel_decomp_full_mthread 00:07:15.281 ************************************ 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:15.281 [2024-07-15 23:50:49.555029] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:15.281 [2024-07-15 23:50:49.555107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170636 ] 00:07:15.281 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.281 [2024-07-15 23:50:49.614362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.281 [2024-07-15 23:50:49.704649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 
bytes' 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.281 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 
00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:07:15.282 23:50:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.663 00:07:16.663 real 0m1.379s 00:07:16.663 user 0m1.256s 00:07:16.663 sys 0m0.125s 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.663 23:50:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:16.663 ************************************ 00:07:16.663 END TEST accel_decomp_full_mthread 00:07:16.663 ************************************ 00:07:16.663 23:50:50 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:16.663 23:50:50 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:16.663 23:50:50 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:16.663 23:50:50 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:16.663 23:50:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.663 23:50:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.663 23:50:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.663 23:50:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.663 23:50:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.663 23:50:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.663 23:50:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.663 23:50:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:16.663 23:50:50 accel -- accel/accel.sh@41 -- # jq -r . 00:07:16.663 ************************************ 00:07:16.663 START TEST accel_dif_functional_tests 00:07:16.663 ************************************ 00:07:16.663 23:50:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:16.663 [2024-07-15 23:50:51.002411] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:16.663 [2024-07-15 23:50:51.002511] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170852 ] 00:07:16.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.663 [2024-07-15 23:50:51.062825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.663 [2024-07-15 23:50:51.155444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.663 [2024-07-15 23:50:51.155526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.663 [2024-07-15 23:50:51.155559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.922 00:07:16.922 00:07:16.922 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.922 http://cunit.sourceforge.net/ 00:07:16.922 00:07:16.922 00:07:16.922 Suite: accel_dif 00:07:16.922 Test: verify: DIF generated, GUARD check ...passed 00:07:16.922 Test: verify: DIF generated, APPTAG check ...passed 00:07:16.922 Test: verify: DIF 
generated, REFTAG check ...passed 00:07:16.922 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:50:51.235052] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:16.922 passed 00:07:16.922 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:50:51.235125] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:16.922 passed 00:07:16.922 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:50:51.235179] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:16.922 passed 00:07:16.922 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:16.922 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 23:50:51.235266] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:16.922 passed 00:07:16.922 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:16.922 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:16.922 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:16.922 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:50:51.235421] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:16.922 passed 00:07:16.922 Test: verify copy: DIF generated, GUARD check ...passed 00:07:16.922 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:16.922 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:16.922 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:50:51.235607] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:16.922 passed 00:07:16.922 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:50:51.235648] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:16.922 passed 00:07:16.922 Test: verify 
copy: DIF not generated, REFTAG check ...[2024-07-15 23:50:51.235685] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:16.922 passed 00:07:16.922 Test: generate copy: DIF generated, GUARD check ...passed 00:07:16.922 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:16.922 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:16.922 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:16.922 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:16.922 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:16.922 Test: generate copy: iovecs-len validate ...[2024-07-15 23:50:51.235934] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:16.922 passed 00:07:16.922 Test: generate copy: buffer alignment validate ...passed 00:07:16.922 00:07:16.922 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.922 suites 1 1 n/a 0 0 00:07:16.922 tests 26 26 26 0 0 00:07:16.922 asserts 115 115 115 0 n/a 00:07:16.922 00:07:16.922 Elapsed time = 0.003 seconds 00:07:16.922 00:07:16.922 real 0m0.425s 00:07:16.922 user 0m0.604s 00:07:16.922 sys 0m0.157s 00:07:16.922 23:50:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.922 23:50:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:16.922 ************************************ 00:07:16.922 END TEST accel_dif_functional_tests 00:07:16.922 ************************************ 00:07:16.922 00:07:16.922 real 0m30.068s 00:07:16.922 user 0m33.574s 00:07:16.922 sys 0m4.150s 00:07:16.922 23:50:51 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.922 23:50:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.922 ************************************ 00:07:16.922 END TEST accel 00:07:16.922 
************************************ 00:07:17.180 23:50:51 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.180 23:50:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.180 23:50:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.180 23:50:51 -- common/autotest_common.sh@10 -- # set +x 00:07:17.180 ************************************ 00:07:17.180 START TEST accel_rpc 00:07:17.180 ************************************ 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:17.180 * Looking for test storage... 00:07:17.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:17.180 23:50:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.180 23:50:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1170919 00:07:17.180 23:50:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1170919 00:07:17.180 23:50:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1170919 ']' 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.180 23:50:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.180 [2024-07-15 23:50:51.572534] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:17.180 [2024-07-15 23:50:51.572642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170919 ] 00:07:17.180 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.180 [2024-07-15 23:50:51.632205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.439 [2024-07-15 23:50:51.719553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.439 23:50:51 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.439 23:50:51 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:17.439 23:50:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:17.439 23:50:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:17.439 23:50:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:17.439 23:50:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:17.439 23:50:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:17.439 23:50:51 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.439 23:50:51 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.439 23:50:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.439 ************************************ 00:07:17.439 START TEST accel_assign_opcode 00:07:17.439 ************************************ 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:17.439 [2024-07-15 23:50:51.844355] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:17.439 [2024-07-15 23:50:51.852343] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.439 23:50:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.698 software 00:07:17.698 00:07:17.698 real 0m0.263s 00:07:17.698 user 0m0.045s 00:07:17.698 sys 0m0.005s 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.698 23:50:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:17.698 ************************************ 00:07:17.698 END TEST accel_assign_opcode 00:07:17.698 ************************************ 00:07:17.698 23:50:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1170919 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1170919 ']' 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1170919 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1170919 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1170919' 00:07:17.698 killing process with pid 1170919 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@965 -- # kill 1170919 00:07:17.698 23:50:52 accel_rpc -- common/autotest_common.sh@970 -- # wait 1170919 00:07:17.958 00:07:17.958 real 0m0.950s 00:07:17.958 user 0m0.958s 00:07:17.958 sys 0m0.396s 00:07:17.958 23:50:52 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.958 23:50:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.958 ************************************ 00:07:17.958 END TEST accel_rpc 00:07:17.958 
************************************ 00:07:17.958 23:50:52 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:17.958 23:50:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:17.958 23:50:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.958 23:50:52 -- common/autotest_common.sh@10 -- # set +x 00:07:17.958 ************************************ 00:07:17.958 START TEST app_cmdline 00:07:17.958 ************************************ 00:07:17.958 23:50:52 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:18.216 * Looking for test storage... 00:07:18.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:18.216 23:50:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:18.216 23:50:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1171091 00:07:18.216 23:50:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1171091 00:07:18.216 23:50:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1171091 ']' 00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.216 23:50:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.216 [2024-07-15 23:50:52.572317] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:18.216 [2024-07-15 23:50:52.572408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1171091 ] 00:07:18.216 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.216 [2024-07-15 23:50:52.631928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.216 [2024-07-15 23:50:52.719305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.503 23:50:52 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.503 23:50:52 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:18.503 23:50:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:18.762 { 00:07:18.762 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:18.762 "fields": { 00:07:18.762 "major": 24, 00:07:18.762 "minor": 5, 00:07:18.762 "patch": 1, 00:07:18.762 "suffix": "-pre", 00:07:18.762 "commit": "5fa2f5086" 00:07:18.762 } 00:07:18.762 } 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.762 
23:50:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:18.762 23:50:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.762 23:50:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.018 23:50:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.018 23:50:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.018 23:50:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:19.018 23:50:53 app_cmdline -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:19.275 request: 00:07:19.275 { 00:07:19.275 "method": "env_dpdk_get_mem_stats", 00:07:19.275 "req_id": 1 00:07:19.275 } 00:07:19.275 Got JSON-RPC error response 00:07:19.275 response: 00:07:19.275 { 00:07:19.276 "code": -32601, 00:07:19.276 "message": "Method not found" 00:07:19.276 } 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.276 23:50:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1171091 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1171091 ']' 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1171091 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1171091 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1171091' 00:07:19.276 killing process with pid 1171091 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@965 -- # kill 1171091 00:07:19.276 23:50:53 app_cmdline -- common/autotest_common.sh@970 -- # wait 1171091 00:07:19.534 00:07:19.534 real 0m1.405s 00:07:19.534 user 0m1.892s 00:07:19.534 sys 0m0.432s 00:07:19.534 23:50:53 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.534 
23:50:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.534 ************************************ 00:07:19.534 END TEST app_cmdline 00:07:19.534 ************************************ 00:07:19.534 23:50:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:19.534 23:50:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:19.534 23:50:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.534 23:50:53 -- common/autotest_common.sh@10 -- # set +x 00:07:19.534 ************************************ 00:07:19.534 START TEST version 00:07:19.534 ************************************ 00:07:19.534 23:50:53 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:19.534 * Looking for test storage... 00:07:19.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:19.534 23:50:53 version -- app/version.sh@17 -- # get_header_version major 00:07:19.534 23:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # cut -f2 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.534 23:50:53 version -- app/version.sh@17 -- # major=24 00:07:19.534 23:50:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:19.534 23:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # cut -f2 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.534 23:50:53 version -- app/version.sh@18 -- # minor=5 00:07:19.534 23:50:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:19.534 23:50:53 version -- 
app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # cut -f2 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.534 23:50:53 version -- app/version.sh@19 -- # patch=1 00:07:19.534 23:50:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:19.534 23:50:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # cut -f2 00:07:19.534 23:50:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:19.534 23:50:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:19.534 23:50:53 version -- app/version.sh@22 -- # version=24.5 00:07:19.534 23:50:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:19.534 23:50:53 version -- app/version.sh@25 -- # version=24.5.1 00:07:19.534 23:50:53 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:19.534 23:50:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:19.534 23:50:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:19.534 23:50:54 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:19.534 23:50:54 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:19.534 00:07:19.534 real 0m0.110s 00:07:19.534 user 0m0.057s 00:07:19.535 sys 0m0.074s 00:07:19.535 23:50:54 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.535 23:50:54 version -- common/autotest_common.sh@10 -- # set +x 
00:07:19.535 ************************************ 00:07:19.535 END TEST version 00:07:19.535 ************************************ 00:07:19.793 23:50:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@198 -- # uname -s 00:07:19.793 23:50:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:19.793 23:50:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:19.793 23:50:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:19.793 23:50:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:19.793 23:50:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.793 23:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.793 23:50:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:19.793 23:50:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:19.793 23:50:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:19.793 23:50:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.793 23:50:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.793 23:50:54 -- common/autotest_common.sh@10 -- # set +x 00:07:19.793 ************************************ 00:07:19.793 START TEST nvmf_tcp 00:07:19.793 ************************************ 00:07:19.793 23:50:54 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:19.793 * Looking for test storage... 
00:07:19.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.793 
23:50:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.793 23:50:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.793 23:50:54 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.793 23:50:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.793 23:50:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.793 23:50:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.793 23:50:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:19.793 23:50:54 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:19.793 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:19.793 23:50:54 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:19.794 23:50:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.794 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:19.794 23:50:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:19.794 23:50:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:19.794 23:50:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.794 23:50:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.794 
************************************ 00:07:19.794 START TEST nvmf_example 00:07:19.794 ************************************ 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:19.794 * Looking for test storage... 00:07:19.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.794 23:50:54 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.794 23:50:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:21.701 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:21.701 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:21.701 Found net devices under 0000:08:00.0: cvl_0_0 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.701 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:21.702 Found net devices under 0000:08:00.1: cvl_0_1 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.702 23:50:55 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:07:21.702 00:07:21.702 --- 10.0.0.2 ping statistics --- 00:07:21.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.702 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:07:21.702 00:07:21.702 --- 10.0.0.1 ping statistics --- 00:07:21.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.702 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.702 23:50:55 
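Stripped of xtrace noise, the `nvmf_tcp_init` sequence above builds a point-to-point topology on one host: the target-side interface is moved into a private network namespace and each side gets an address on 10.0.0.0/24, so the initiator in the default namespace reaches the target at 10.0.0.2:4420. Condensed for orientation (interface and namespace names are from this run; requires root, shown as a sketch rather than the `common.sh` source):

```shell
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
```

The two ping checks in the trace (0.224 ms and 0.057 ms round trips) confirm both directions of this link before the nvmf target application is started inside the namespace.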
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1172591 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1172591 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1172591 ']' 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.702 23:50:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.702 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:21.960 23:50:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:21.960 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.153 Initializing NVMe Controllers 00:07:34.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:34.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:34.153 Initialization complete. Launching workers. 
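The target-side configuration interleaved with the `xtrace_disable` guards above reduces to the usual RPC sequence against the running nvmf app: create the TCP transport, back it with a malloc bdev, expose that bdev as a namespace of a subsystem, and listen on the namespaced address. The commands below are exactly those in the trace (`rpc_cmd` is the suite's wrapper around `rpc.py`; it needs a live target on `/var/tmp/spdk.sock`, so this is a reference sequence rather than something runnable standalone):

```shell
rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
rpc_cmd bdev_malloc_create 64 512                   # 64 MiB, 512 B blocks -> "Malloc0"
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With that in place, `spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10` connects from the default namespace and drives the 10-second 70/30 random read/write workload whose latency table follows.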
00:07:34.153 ======================================================== 00:07:34.153 Latency(us) 00:07:34.153 Device Information : IOPS MiB/s Average min max 00:07:34.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13765.00 53.77 4649.91 1036.35 15249.89 00:07:34.153 ======================================================== 00:07:34.153 Total : 13765.00 53.77 4649.91 1036.35 15249.89 00:07:34.153 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.153 rmmod nvme_tcp 00:07:34.153 rmmod nvme_fabrics 00:07:34.153 rmmod nvme_keyring 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1172591 ']' 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1172591 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1172591 ']' 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1172591 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1172591 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:34.153 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1172591' 00:07:34.154 killing process with pid 1172591 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1172591 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1172591 00:07:34.154 nvmf threads initialize successfully 00:07:34.154 bdev subsystem init successfully 00:07:34.154 created a nvmf target service 00:07:34.154 create targets's poll groups done 00:07:34.154 all subsystems of target started 00:07:34.154 nvmf target is running 00:07:34.154 all subsystems of target stopped 00:07:34.154 destroy targets's poll groups done 00:07:34.154 destroyed the nvmf target service 00:07:34.154 bdev subsystem finish successfully 00:07:34.154 nvmf threads destroy successfully 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.154 23:51:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.723 23:51:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.723 23:51:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:34.723 23:51:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.723 23:51:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.723 00:07:34.723 real 0m14.793s 00:07:34.723 user 0m42.132s 00:07:34.723 sys 0m2.922s 00:07:34.723 23:51:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.724 23:51:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.724 ************************************ 00:07:34.724 END TEST nvmf_example 00:07:34.724 ************************************ 00:07:34.724 23:51:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.724 23:51:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:34.724 23:51:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.724 23:51:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.724 ************************************ 00:07:34.724 START TEST nvmf_filesystem 00:07:34.724 ************************************ 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.724 * Looking for test storage... 
00:07:34.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # 
CONFIG_SHARED=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:34.724 23:51:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:34.724 #define SPDK_CONFIG_H 00:07:34.724 
#define SPDK_CONFIG_APPS 1 00:07:34.724 #define SPDK_CONFIG_ARCH native 00:07:34.724 #undef SPDK_CONFIG_ASAN 00:07:34.724 #undef SPDK_CONFIG_AVAHI 00:07:34.724 #undef SPDK_CONFIG_CET 00:07:34.724 #define SPDK_CONFIG_COVERAGE 1 00:07:34.725 #define SPDK_CONFIG_CROSS_PREFIX 00:07:34.725 #undef SPDK_CONFIG_CRYPTO 00:07:34.725 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:34.725 #undef SPDK_CONFIG_CUSTOMOCF 00:07:34.725 #undef SPDK_CONFIG_DAOS 00:07:34.725 #define SPDK_CONFIG_DAOS_DIR 00:07:34.725 #define SPDK_CONFIG_DEBUG 1 00:07:34.725 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:34.725 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.725 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:34.725 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.725 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:34.725 #undef SPDK_CONFIG_DPDK_UADK 00:07:34.725 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.725 #define SPDK_CONFIG_EXAMPLES 1 00:07:34.725 #undef SPDK_CONFIG_FC 00:07:34.725 #define SPDK_CONFIG_FC_PATH 00:07:34.725 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:34.725 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:34.725 #undef SPDK_CONFIG_FUSE 00:07:34.725 #undef SPDK_CONFIG_FUZZER 00:07:34.725 #define SPDK_CONFIG_FUZZER_LIB 00:07:34.725 #undef SPDK_CONFIG_GOLANG 00:07:34.725 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:34.725 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:34.725 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:34.725 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:34.725 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:34.725 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:34.725 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:34.725 #define SPDK_CONFIG_IDXD 1 00:07:34.725 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:34.725 #undef SPDK_CONFIG_IPSEC_MB 00:07:34.725 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:34.725 
#define SPDK_CONFIG_ISAL 1 00:07:34.725 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:34.725 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:34.725 #define SPDK_CONFIG_LIBDIR 00:07:34.725 #undef SPDK_CONFIG_LTO 00:07:34.725 #define SPDK_CONFIG_MAX_LCORES 00:07:34.725 #define SPDK_CONFIG_NVME_CUSE 1 00:07:34.725 #undef SPDK_CONFIG_OCF 00:07:34.725 #define SPDK_CONFIG_OCF_PATH 00:07:34.725 #define SPDK_CONFIG_OPENSSL_PATH 00:07:34.725 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:34.725 #define SPDK_CONFIG_PGO_DIR 00:07:34.725 #undef SPDK_CONFIG_PGO_USE 00:07:34.725 #define SPDK_CONFIG_PREFIX /usr/local 00:07:34.725 #undef SPDK_CONFIG_RAID5F 00:07:34.725 #undef SPDK_CONFIG_RBD 00:07:34.725 #define SPDK_CONFIG_RDMA 1 00:07:34.725 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:34.725 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:34.725 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:34.725 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:34.725 #define SPDK_CONFIG_SHARED 1 00:07:34.725 #undef SPDK_CONFIG_SMA 00:07:34.725 #define SPDK_CONFIG_TESTS 1 00:07:34.725 #undef SPDK_CONFIG_TSAN 00:07:34.725 #define SPDK_CONFIG_UBLK 1 00:07:34.725 #define SPDK_CONFIG_UBSAN 1 00:07:34.725 #undef SPDK_CONFIG_UNIT_TESTS 00:07:34.725 #undef SPDK_CONFIG_URING 00:07:34.725 #define SPDK_CONFIG_URING_PATH 00:07:34.725 #undef SPDK_CONFIG_URING_ZNS 00:07:34.725 #undef SPDK_CONFIG_USDT 00:07:34.725 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:34.725 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:34.725 #define SPDK_CONFIG_VFIO_USER 1 00:07:34.725 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:34.725 #define SPDK_CONFIG_VHOST 1 00:07:34.725 #define SPDK_CONFIG_VIRTIO 1 00:07:34.725 #undef SPDK_CONFIG_VTUNE 00:07:34.725 #define SPDK_CONFIG_VTUNE_DIR 00:07:34.725 #define SPDK_CONFIG_WERROR 1 00:07:34.725 #define SPDK_CONFIG_WPDK_DIR 00:07:34.725 #undef SPDK_CONFIG_XNVME 00:07:34.725 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:34.725 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:34.725 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:34.726 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:34.726 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export 
SPDK_TEST_VMD 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export 
SPDK_TEST_NVMF_NICS 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:34.726 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.727 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j32 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:34.727 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1174281 ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1174281 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.m1Vp1E 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 
00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.m1Vp1E/tests/target /tmp/spdk.m1Vp1E 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1957711872 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3326717952 00:07:34.727 23:51:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=41834844160 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=53546168320 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=11711324160 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26768371712 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773082112 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=10700750848 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=10709233664 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
uses["$mount"]=8482816 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26772729856 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773086208 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=356352 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=5354610688 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5354614784 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:34.727 * Looking for test storage... 
00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=41834844160 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=13925916672 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:34.727 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
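Every `\t domain -- file@line -- $` prefix in this log comes from the `PS4` assignment plus `set -x` shown above: bash prompt-expands `PS4` (including the `\t` time escape) before printing each traced command. A small self-contained sketch, with an illustrative `test_domain` value:

```shell
#!/usr/bin/env bash
# Sketch of how this log's trace prefixes are produced. PS4 is re-expanded
# per traced command once `set -x` is active; the trace goes to stderr, so we
# capture it with 2>&1 for inspection.
test_domain=nvmf_tcp.nvmf_filesystem
PS4=' \t $test_domain -- ${BASH_SOURCE##*/}@${LINENO} -- \$ '
trace=$( { set -x; : probe_command; set +x; } 2>&1 )
echo "$trace"
```

Combined with `set -o errtrace`, `shopt -s extdebug`, and the `ERR` trap seen above, this is what lets the harness print a timestamped backtrace when any traced command fails.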
00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.728 23:51:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.631 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.631 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.631 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:36.631 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.631 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:36.632 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:36.632 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:36.632 Found net devices under 0000:08:00.0: cvl_0_0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:36.632 Found net devices under 0000:08:00.1: cvl_0_1 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:07:36.632 00:07:36.632 --- 10.0.0.2 ping statistics --- 00:07:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.632 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:07:36.632 00:07:36.632 --- 10.0.0.1 ping statistics --- 00:07:36.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.632 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.632 ************************************ 00:07:36.632 START TEST nvmf_filesystem_no_in_capsule 00:07:36.632 ************************************ 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1175773 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1175773 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1175773 ']' 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.632 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.633 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.633 23:51:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.633 [2024-07-15 23:51:11.015681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:36.633 [2024-07-15 23:51:11.015779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.633 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.633 [2024-07-15 23:51:11.082829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:36.892 [2024-07-15 23:51:11.175261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.892 [2024-07-15 23:51:11.175319] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.892 [2024-07-15 23:51:11.175335] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.892 [2024-07-15 23:51:11.175348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.892 [2024-07-15 23:51:11.175360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:36.892 [2024-07-15 23:51:11.175443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.892 [2024-07-15 23:51:11.175497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.892 [2024-07-15 23:51:11.175731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.892 [2024-07-15 23:51:11.175734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.892 [2024-07-15 23:51:11.318780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.892 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 Malloc1 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.150 23:51:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.150 [2024-07-15 23:51:11.480385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.150 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:37.151 { 00:07:37.151 "name": "Malloc1", 00:07:37.151 "aliases": [ 00:07:37.151 "04f38b05-7fb5-4243-8439-1ef69c094148" 00:07:37.151 ], 00:07:37.151 "product_name": "Malloc disk", 
00:07:37.151 "block_size": 512, 00:07:37.151 "num_blocks": 1048576, 00:07:37.151 "uuid": "04f38b05-7fb5-4243-8439-1ef69c094148", 00:07:37.151 "assigned_rate_limits": { 00:07:37.151 "rw_ios_per_sec": 0, 00:07:37.151 "rw_mbytes_per_sec": 0, 00:07:37.151 "r_mbytes_per_sec": 0, 00:07:37.151 "w_mbytes_per_sec": 0 00:07:37.151 }, 00:07:37.151 "claimed": true, 00:07:37.151 "claim_type": "exclusive_write", 00:07:37.151 "zoned": false, 00:07:37.151 "supported_io_types": { 00:07:37.151 "read": true, 00:07:37.151 "write": true, 00:07:37.151 "unmap": true, 00:07:37.151 "write_zeroes": true, 00:07:37.151 "flush": true, 00:07:37.151 "reset": true, 00:07:37.151 "compare": false, 00:07:37.151 "compare_and_write": false, 00:07:37.151 "abort": true, 00:07:37.151 "nvme_admin": false, 00:07:37.151 "nvme_io": false 00:07:37.151 }, 00:07:37.151 "memory_domains": [ 00:07:37.151 { 00:07:37.151 "dma_device_id": "system", 00:07:37.151 "dma_device_type": 1 00:07:37.151 }, 00:07:37.151 { 00:07:37.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.151 "dma_device_type": 2 00:07:37.151 } 00:07:37.151 ], 00:07:37.151 "driver_specific": {} 00:07:37.151 } 00:07:37.151 ]' 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:37.151 23:51:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:37.151 23:51:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.716 23:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.716 23:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:37.716 23:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.716 23:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:37.716 23:51:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:39.614 23:51:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:39.614 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.177 23:51:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.740 23:51:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.670 ************************************ 00:07:41.670 START TEST filesystem_ext4 00:07:41.670 ************************************ 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # force=-F 00:07:41.670 23:51:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.670 mke2fs 1.46.5 (30-Dec-2021) 00:07:41.670 Discarding device blocks: 0/522240 done 00:07:41.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:41.670 Filesystem UUID: ff1656c7-a102-4683-bcad-1c2bb27e1e47 00:07:41.670 Superblock backups stored on blocks: 00:07:41.670 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:41.670 00:07:41.670 Allocating group tables: 0/64 done 00:07:41.670 Writing inode tables: 0/64 done 00:07:41.927 Creating journal (8192 blocks): done 00:07:42.888 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:42.888 00:07:42.888 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:42.888 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.145 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.145 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:43.145 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.145 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.146 23:51:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1175773 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.146 00:07:43.146 real 0m1.478s 00:07:43.146 user 0m0.006s 00:07:43.146 sys 0m0.062s 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:43.146 ************************************ 00:07:43.146 END TEST filesystem_ext4 00:07:43.146 ************************************ 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.146 ************************************ 00:07:43.146 START TEST filesystem_btrfs 00:07:43.146 ************************************ 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:43.146 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.403 btrfs-progs v6.6.2 00:07:43.403 See https://btrfs.readthedocs.io for more information. 00:07:43.403 00:07:43.403 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:43.403 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.403 this does not affect your deployments: 00:07:43.403 - DUP for metadata (-m dup) 00:07:43.403 - enabled no-holes (-O no-holes) 00:07:43.403 - enabled free-space-tree (-R free-space-tree) 00:07:43.403 00:07:43.403 Label: (null) 00:07:43.403 UUID: 538843b3-8467-4b86-999f-644241f5dff1 00:07:43.403 Node size: 16384 00:07:43.403 Sector size: 4096 00:07:43.403 Filesystem size: 510.00MiB 00:07:43.403 Block group profiles: 00:07:43.403 Data: single 8.00MiB 00:07:43.403 Metadata: DUP 32.00MiB 00:07:43.403 System: DUP 8.00MiB 00:07:43.403 SSD detected: yes 00:07:43.403 Zoned device: no 00:07:43.403 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.403 Runtime features: free-space-tree 00:07:43.403 Checksum: crc32c 00:07:43.403 Number of devices: 1 00:07:43.403 Devices: 00:07:43.403 ID SIZE PATH 00:07:43.403 1 510.00MiB /dev/nvme0n1p1 00:07:43.403 00:07:43.403 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:43.403 23:51:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:44.333 23:51:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1175773 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.333 00:07:44.333 real 0m1.068s 00:07:44.333 user 0m0.025s 00:07:44.333 sys 0m0.112s 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.333 ************************************ 00:07:44.333 END TEST filesystem_btrfs 00:07:44.333 ************************************ 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.333 ************************************ 00:07:44.333 START TEST 
filesystem_xfs 00:07:44.333 ************************************ 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:44.333 23:51:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.333 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.333 = sectsz=512 attr=2, projid32bit=1 00:07:44.333 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.333 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.333 data = bsize=4096 blocks=130560, imaxpct=25 
00:07:44.333 = sunit=0 swidth=0 blks 00:07:44.333 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.333 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.333 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.333 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.263 Discarding blocks...Done. 00:07:45.263 23:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:45.264 23:51:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1175773 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.163 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.164 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:07:47.164 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.164 00:07:47.164 real 0m2.918s 00:07:47.164 user 0m0.018s 00:07:47.164 sys 0m0.062s 00:07:47.164 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.164 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.164 ************************************ 00:07:47.164 END TEST filesystem_xfs 00:07:47.164 ************************************ 00:07:47.164 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.420 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:47.420 23:51:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:47.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1175773 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1175773 ']' 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1175773 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1175773 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1175773' 00:07:47.678 killing process with pid 1175773 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1175773 00:07:47.678 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1175773 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:47.936 00:07:47.936 real 0m11.404s 00:07:47.936 user 0m43.788s 00:07:47.936 sys 0m1.726s 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 ************************************ 00:07:47.936 END TEST nvmf_filesystem_no_in_capsule 00:07:47.936 ************************************ 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 ************************************ 00:07:47.936 START TEST nvmf_filesystem_in_capsule 00:07:47.936 ************************************ 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:47.936 23:51:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1177784 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1177784 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1177784 ']' 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:47.936 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.194 [2024-07-15 23:51:22.455282] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:48.194 [2024-07-15 23:51:22.455368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.194 [2024-07-15 23:51:22.522428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.194 [2024-07-15 23:51:22.613053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.194 [2024-07-15 23:51:22.613108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.194 [2024-07-15 23:51:22.613124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.194 [2024-07-15 23:51:22.613144] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.194 [2024-07-15 23:51:22.613158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:48.194 [2024-07-15 23:51:22.613231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.194 [2024-07-15 23:51:22.613294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.194 [2024-07-15 23:51:22.613379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.194 [2024-07-15 23:51:22.613411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 [2024-07-15 23:51:22.759803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 [2024-07-15 23:51:22.921365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:48.452 { 00:07:48.452 "name": "Malloc1", 00:07:48.452 "aliases": [ 00:07:48.452 "5294d7a1-c87e-43a2-ba7f-43bd1c83fefb" 00:07:48.452 ], 00:07:48.452 "product_name": "Malloc disk", 00:07:48.452 "block_size": 512, 00:07:48.452 "num_blocks": 1048576, 00:07:48.452 "uuid": "5294d7a1-c87e-43a2-ba7f-43bd1c83fefb", 00:07:48.452 "assigned_rate_limits": { 
00:07:48.452 "rw_ios_per_sec": 0, 00:07:48.452 "rw_mbytes_per_sec": 0, 00:07:48.452 "r_mbytes_per_sec": 0, 00:07:48.452 "w_mbytes_per_sec": 0 00:07:48.452 }, 00:07:48.452 "claimed": true, 00:07:48.452 "claim_type": "exclusive_write", 00:07:48.452 "zoned": false, 00:07:48.452 "supported_io_types": { 00:07:48.452 "read": true, 00:07:48.452 "write": true, 00:07:48.452 "unmap": true, 00:07:48.452 "write_zeroes": true, 00:07:48.452 "flush": true, 00:07:48.452 "reset": true, 00:07:48.452 "compare": false, 00:07:48.452 "compare_and_write": false, 00:07:48.452 "abort": true, 00:07:48.452 "nvme_admin": false, 00:07:48.452 "nvme_io": false 00:07:48.452 }, 00:07:48.452 "memory_domains": [ 00:07:48.452 { 00:07:48.452 "dma_device_id": "system", 00:07:48.452 "dma_device_type": 1 00:07:48.452 }, 00:07:48.452 { 00:07:48.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.452 "dma_device_type": 2 00:07:48.452 } 00:07:48.452 ], 00:07:48.452 "driver_specific": {} 00:07:48.452 } 00:07:48.452 ]' 00:07:48.452 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:48.710 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:48.710 23:51:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:48.710 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:48.710 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:48.710 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:48.710 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:48.710 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.274 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.274 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:49.274 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.274 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:49.274 23:51:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:51.170 23:51:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:51.170 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:51.171 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:51.171 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:51.171 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:51.428 23:51:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.364 23:51:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.297 23:51:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.297 ************************************ 00:07:53.297 START TEST filesystem_in_capsule_ext4 00:07:53.297 ************************************ 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:53.297 23:51:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:07:53.297 mke2fs 1.46.5 (30-Dec-2021) 00:07:53.297 Discarding device blocks: 0/522240 done 00:07:53.297 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:53.297 Filesystem UUID: 11044528-d40e-41d2-bc16-7e8c17daa71e 00:07:53.297 Superblock backups stored on blocks: 00:07:53.297 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:53.297 00:07:53.297 Allocating group tables: 0/64 done 00:07:53.297 Writing inode tables: 0/64 done 00:07:53.554 Creating journal (8192 blocks): done 00:07:54.377 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:54.377 00:07:54.377 23:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:54.377 23:51:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1177784 00:07:54.635 23:51:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.635 00:07:54.635 real 0m1.570s 00:07:54.635 user 0m0.026s 00:07:54.635 sys 0m0.053s 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 ************************************ 00:07:54.635 END TEST filesystem_in_capsule_ext4 00:07:54.635 ************************************ 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.635 ************************************ 00:07:54.635 START TEST filesystem_in_capsule_btrfs 00:07:54.635 ************************************ 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:54.635 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.893 btrfs-progs v6.6.2 00:07:54.893 See https://btrfs.readthedocs.io for more information. 00:07:54.893 00:07:54.893 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:54.893 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.893 this does not affect your deployments: 00:07:54.893 - DUP for metadata (-m dup) 00:07:54.893 - enabled no-holes (-O no-holes) 00:07:54.893 - enabled free-space-tree (-R free-space-tree) 00:07:54.893 00:07:54.893 Label: (null) 00:07:54.893 UUID: dd279fcf-2f5a-438a-804d-3c0c631dbf05 00:07:54.893 Node size: 16384 00:07:54.893 Sector size: 4096 00:07:54.893 Filesystem size: 510.00MiB 00:07:54.893 Block group profiles: 00:07:54.893 Data: single 8.00MiB 00:07:54.893 Metadata: DUP 32.00MiB 00:07:54.893 System: DUP 8.00MiB 00:07:54.893 SSD detected: yes 00:07:54.893 Zoned device: no 00:07:54.893 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.893 Runtime features: free-space-tree 00:07:54.893 Checksum: crc32c 00:07:54.893 Number of devices: 1 00:07:54.893 Devices: 00:07:54.893 ID SIZE PATH 00:07:54.893 1 510.00MiB /dev/nvme0n1p1 00:07:54.893 00:07:54.893 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:54.893 23:51:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1177784 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.266 00:07:56.266 real 0m1.294s 00:07:56.266 user 0m0.009s 00:07:56.266 sys 0m0.122s 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.266 ************************************ 00:07:56.266 END TEST filesystem_in_capsule_btrfs 00:07:56.266 ************************************ 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:07:56.266 ************************************ 00:07:56.266 START TEST filesystem_in_capsule_xfs 00:07:56.266 ************************************ 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:56.266 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:56.267 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:56.267 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:56.267 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:56.267 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:56.267 23:51:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:56.267 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:07:56.267 = sectsz=512 attr=2, projid32bit=1 00:07:56.267 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:56.267 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:56.267 data = bsize=4096 blocks=130560, imaxpct=25 00:07:56.267 = sunit=0 swidth=0 blks 00:07:56.267 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:56.267 log =internal log bsize=4096 blocks=16384, version=2 00:07:56.267 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:56.267 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:56.832 Discarding blocks...Done. 00:07:56.832 23:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:56.832 23:51:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1177784 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l 
-o NAME 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.357 00:07:59.357 real 0m3.333s 00:07:59.357 user 0m0.017s 00:07:59.357 sys 0m0.056s 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.357 ************************************ 00:07:59.357 END TEST filesystem_in_capsule_xfs 00:07:59.357 ************************************ 00:07:59.357 23:51:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.614 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:59.614 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1177784 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1177784 ']' 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1177784 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1177784 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1177784' 00:07:59.873 killing process with pid 1177784 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1177784 00:07:59.873 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1177784 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.136 00:08:00.136 real 0m12.112s 00:08:00.136 user 0m46.549s 00:08:00.136 sys 0m1.779s 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.136 ************************************ 00:08:00.136 END TEST nvmf_filesystem_in_capsule 00:08:00.136 ************************************ 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:00.136 rmmod nvme_tcp 00:08:00.136 rmmod nvme_fabrics 00:08:00.136 rmmod nvme_keyring 00:08:00.136 23:51:34 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.136 23:51:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.707 23:51:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:02.707 00:08:02.707 real 0m27.594s 00:08:02.707 user 1m31.072s 00:08:02.707 sys 0m4.838s 00:08:02.707 23:51:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.707 23:51:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.707 ************************************ 00:08:02.707 END TEST nvmf_filesystem 00:08:02.707 ************************************ 00:08:02.707 23:51:36 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:02.707 23:51:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:02.707 23:51:36 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:02.707 23:51:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:02.707 ************************************ 00:08:02.707 START TEST nvmf_target_discovery 00:08:02.707 ************************************ 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:02.707 * Looking for test storage... 00:08:02.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:02.707 23:51:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:02.708 23:51:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.087 23:51:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.087 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:04.088 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:04.088 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:04.088 Found net devices under 0000:08:00.0: cvl_0_0 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:04.088 Found net devices under 0000:08:00.1: cvl_0_1 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.088 23:51:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:04.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:08:04.088 00:08:04.088 --- 10.0.0.2 ping statistics --- 00:08:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.088 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:04.088 00:08:04.088 --- 10.0.0.1 ping statistics --- 00:08:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.088 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1180515 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # 
waitforlisten 1180515 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1180515 ']' 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.088 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.088 [2024-07-15 23:51:38.506871] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:04.088 [2024-07-15 23:51:38.506960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.088 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.088 [2024-07-15 23:51:38.570560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.347 [2024-07-15 23:51:38.657936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.347 [2024-07-15 23:51:38.657991] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.347 [2024-07-15 23:51:38.658007] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.347 [2024-07-15 23:51:38.658020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:04.347 [2024-07-15 23:51:38.658032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.347 [2024-07-15 23:51:38.658116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.347 [2024-07-15 23:51:38.658199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.347 [2024-07-15 23:51:38.658201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.347 [2024-07-15 23:51:38.658170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 [2024-07-15 23:51:38.805782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:04.347 23:51:38 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 Null1 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 [2024-07-15 23:51:38.846067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.347 Null2 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.347 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 Null3 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 Null4 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.606 23:51:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:08:04.606 00:08:04.606 Discovery Log Number of Records 6, Generation counter 6 00:08:04.606 =====Discovery Log Entry 0====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: ipv4 00:08:04.606 subtype: current discovery subsystem 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4420 00:08:04.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: explicit discovery connections, duplicate discovery information 00:08:04.606 sectype: none 00:08:04.606 =====Discovery Log Entry 1====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: ipv4 00:08:04.606 subtype: nvme subsystem 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4420 00:08:04.606 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: none 00:08:04.606 sectype: none 00:08:04.606 =====Discovery Log Entry 2====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: 
ipv4 00:08:04.606 subtype: nvme subsystem 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4420 00:08:04.606 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: none 00:08:04.606 sectype: none 00:08:04.606 =====Discovery Log Entry 3====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: ipv4 00:08:04.606 subtype: nvme subsystem 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4420 00:08:04.606 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: none 00:08:04.606 sectype: none 00:08:04.606 =====Discovery Log Entry 4====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: ipv4 00:08:04.606 subtype: nvme subsystem 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4420 00:08:04.606 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: none 00:08:04.606 sectype: none 00:08:04.606 =====Discovery Log Entry 5====== 00:08:04.606 trtype: tcp 00:08:04.606 adrfam: ipv4 00:08:04.606 subtype: discovery subsystem referral 00:08:04.606 treq: not required 00:08:04.606 portid: 0 00:08:04.606 trsvcid: 4430 00:08:04.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:04.606 traddr: 10.0.0.2 00:08:04.606 eflags: none 00:08:04.606 sectype: none 00:08:04.606 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:04.606 Perform nvmf subsystem discovery via RPC 00:08:04.606 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:04.606 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.606 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 [ 00:08:04.606 { 00:08:04.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:04.606 "subtype": "Discovery", 00:08:04.606 "listen_addresses": [ 00:08:04.606 { 
00:08:04.606 "trtype": "TCP", 00:08:04.606 "adrfam": "IPv4", 00:08:04.606 "traddr": "10.0.0.2", 00:08:04.606 "trsvcid": "4420" 00:08:04.606 } 00:08:04.606 ], 00:08:04.606 "allow_any_host": true, 00:08:04.606 "hosts": [] 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.606 "subtype": "NVMe", 00:08:04.606 "listen_addresses": [ 00:08:04.606 { 00:08:04.606 "trtype": "TCP", 00:08:04.606 "adrfam": "IPv4", 00:08:04.606 "traddr": "10.0.0.2", 00:08:04.606 "trsvcid": "4420" 00:08:04.606 } 00:08:04.606 ], 00:08:04.606 "allow_any_host": true, 00:08:04.606 "hosts": [], 00:08:04.606 "serial_number": "SPDK00000000000001", 00:08:04.606 "model_number": "SPDK bdev Controller", 00:08:04.606 "max_namespaces": 32, 00:08:04.606 "min_cntlid": 1, 00:08:04.606 "max_cntlid": 65519, 00:08:04.606 "namespaces": [ 00:08:04.606 { 00:08:04.606 "nsid": 1, 00:08:04.606 "bdev_name": "Null1", 00:08:04.606 "name": "Null1", 00:08:04.606 "nguid": "2112022DA230482BAF19A8C7BEC41F35", 00:08:04.606 "uuid": "2112022d-a230-482b-af19-a8c7bec41f35" 00:08:04.606 } 00:08:04.606 ] 00:08:04.606 }, 00:08:04.606 { 00:08:04.606 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:04.606 "subtype": "NVMe", 00:08:04.606 "listen_addresses": [ 00:08:04.606 { 00:08:04.606 "trtype": "TCP", 00:08:04.606 "adrfam": "IPv4", 00:08:04.606 "traddr": "10.0.0.2", 00:08:04.606 "trsvcid": "4420" 00:08:04.606 } 00:08:04.606 ], 00:08:04.606 "allow_any_host": true, 00:08:04.606 "hosts": [], 00:08:04.606 "serial_number": "SPDK00000000000002", 00:08:04.606 "model_number": "SPDK bdev Controller", 00:08:04.606 "max_namespaces": 32, 00:08:04.606 "min_cntlid": 1, 00:08:04.606 "max_cntlid": 65519, 00:08:04.606 "namespaces": [ 00:08:04.606 { 00:08:04.606 "nsid": 1, 00:08:04.606 "bdev_name": "Null2", 00:08:04.606 "name": "Null2", 00:08:04.606 "nguid": "8CCD5AFFD99143049377258FB8B72A16", 00:08:04.606 "uuid": "8ccd5aff-d991-4304-9377-258fb8b72a16" 00:08:04.606 } 00:08:04.606 ] 00:08:04.606 }, 00:08:04.606 { 
00:08:04.607 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:04.607 "subtype": "NVMe", 00:08:04.607 "listen_addresses": [ 00:08:04.607 { 00:08:04.607 "trtype": "TCP", 00:08:04.607 "adrfam": "IPv4", 00:08:04.607 "traddr": "10.0.0.2", 00:08:04.607 "trsvcid": "4420" 00:08:04.607 } 00:08:04.607 ], 00:08:04.607 "allow_any_host": true, 00:08:04.607 "hosts": [], 00:08:04.607 "serial_number": "SPDK00000000000003", 00:08:04.607 "model_number": "SPDK bdev Controller", 00:08:04.607 "max_namespaces": 32, 00:08:04.607 "min_cntlid": 1, 00:08:04.607 "max_cntlid": 65519, 00:08:04.607 "namespaces": [ 00:08:04.607 { 00:08:04.607 "nsid": 1, 00:08:04.607 "bdev_name": "Null3", 00:08:04.607 "name": "Null3", 00:08:04.607 "nguid": "A2AD5F8998FD444BA67F0CF8FBD824B9", 00:08:04.607 "uuid": "a2ad5f89-98fd-444b-a67f-0cf8fbd824b9" 00:08:04.607 } 00:08:04.607 ] 00:08:04.607 }, 00:08:04.607 { 00:08:04.607 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:04.607 "subtype": "NVMe", 00:08:04.607 "listen_addresses": [ 00:08:04.607 { 00:08:04.607 "trtype": "TCP", 00:08:04.607 "adrfam": "IPv4", 00:08:04.607 "traddr": "10.0.0.2", 00:08:04.607 "trsvcid": "4420" 00:08:04.607 } 00:08:04.607 ], 00:08:04.607 "allow_any_host": true, 00:08:04.607 "hosts": [], 00:08:04.607 "serial_number": "SPDK00000000000004", 00:08:04.607 "model_number": "SPDK bdev Controller", 00:08:04.607 "max_namespaces": 32, 00:08:04.607 "min_cntlid": 1, 00:08:04.607 "max_cntlid": 65519, 00:08:04.607 "namespaces": [ 00:08:04.607 { 00:08:04.607 "nsid": 1, 00:08:04.607 "bdev_name": "Null4", 00:08:04.607 "name": "Null4", 00:08:04.607 "nguid": "FE944897FAB5451684224B8D8A59AE42", 00:08:04.607 "uuid": "fe944897-fab5-4516-8422-4b8d8a59ae42" 00:08:04.607 } 00:08:04.607 ] 00:08:04.607 } 00:08:04.607 ] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:04.607 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
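The teardown traced above can be sketched as the following dry run. The `scripts/rpc.py` path is an assumption (substitute your SPDK checkout's copy), and `echo` is used so the sequence is shown without contacting a live target:

```shell
# Dry-run sketch of the teardown traced above: each loop iteration removes
# the subsystem first, then its backing null bdev; the discovery referral
# added earlier is dropped last. scripts/rpc.py is an assumed path.
rpc="scripts/rpc.py"
for i in 1 2 3 4; do
    echo "$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode${i}"
    echo "$rpc bdev_null_delete Null${i}"
done
echo "$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430"
```

Deleting the subsystem before its bdev matters: the namespace still references the bdev, so the reverse order would remove storage out from under an exported namespace.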
00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.866 rmmod nvme_tcp 00:08:04.866 rmmod nvme_fabrics 00:08:04.866 rmmod nvme_keyring 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
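The module cleanup in `nvmftestfini` traced above (the `rmmod nvme_tcp` / `rmmod nvme_fabrics` lines) follows this shape; in the real `common.sh` the `nvme-tcp` removal is retried up to 20 times with errors tolerated (`set +e`) before `nvme-fabrics` is unloaded. Echoed here as a dry run instead of touching kernel modules:

```shell
# Dry-run sketch of nvmftestfini's module unload order: the transport
# module (nvme-tcp) must go before the fabrics core it depends on.
for mod in nvme-tcp nvme-fabrics; do
    echo "modprobe -v -r $mod"
done
```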
00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:04.866 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1180515 ']' 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1180515 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1180515 ']' 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1180515 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1180515 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1180515' 00:08:04.867 killing process with pid 1180515 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1180515 00:08:04.867 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1180515 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.127 23:51:39 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.127 23:51:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.037 23:51:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.037 00:08:07.037 real 0m4.820s 00:08:07.037 user 0m3.864s 00:08:07.037 sys 0m1.500s 00:08:07.037 23:51:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.037 23:51:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 ************************************ 00:08:07.037 END TEST nvmf_target_discovery 00:08:07.037 ************************************ 00:08:07.037 23:51:41 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:07.037 23:51:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:07.037 23:51:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.037 23:51:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 ************************************ 00:08:07.037 START TEST nvmf_referrals 00:08:07.037 ************************************ 00:08:07.037 23:51:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:07.297 * Looking for test storage... 
00:08:07.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
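The host-identity setup from `nvmf/common.sh` traced above runs `nvme gen-hostnqn` and reuses the NQN's UUID suffix as `NVME_HOSTID`. The sketch below shows the same derivation, substituting the kernel's random UUID so it can run without nvme-cli installed (that substitution is an assumption, not what the harness does):

```shell
# Sketch of the NVME_HOSTNQN / NVME_HOSTID derivation seen in the trace.
# The real script uses `nvme gen-hostnqn`; /proc's random UUID stands in.
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip through the last colon -> the UUID
echo "$NVME_HOSTID"
```

This is why the `nvme discover` invocation earlier in the log passes identical values for `--hostnqn`'s UUID part and `--hostid`.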
00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.297 
23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.297 23:51:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
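The referral parameters initialized above (three loopback addresses and referral port 4430) would each be registered later in the test via the `nvmf_discovery_add_referral` RPC; sketched here as a dry run with an assumed `rpc.py` path:

```shell
# Dry-run sketch: one discovery referral per configured referral address,
# all on the shared referral port. Values copied from the trace above.
NVMF_PORT_REFERRAL=4430
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    echo "scripts/rpc.py nvmf_discovery_add_referral -t tcp -a ${ip} -s ${NVMF_PORT_REFERRAL}"
done
```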
00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:09.196 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:09.196 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:09.196 Found net devices under 0000:08:00.0: cvl_0_0 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.196 23:51:43 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:09.196 Found net devices under 0000:08:00.1: cvl_0_1 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:08:09.196 00:08:09.196 --- 10.0.0.2 ping statistics --- 00:08:09.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.196 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:09.196 00:08:09.196 --- 10.0.0.1 ping statistics --- 00:08:09.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.196 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.196 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.197 23:51:43 
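The nvmf_tcp_init sequence recorded above (flush addresses, create a network namespace for the target side, assign 10.0.0.1/10.0.0.2, open port 4420, then ping both ways) can be sketched as a dry-run script. The interface names cvl_0_0/cvl_0_1, the namespace name, and the addresses are taken from this run; the `run` echo wrapper is an addition here so the sketch executes without root privileges — drop it to apply the commands for real.

```shell
# Dry-run sketch of the target/initiator wiring performed by nvmf_tcp_init.
# "run" only prints each command; remove it to actually configure the host.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

Moving only the target interface into the namespace is what lets a single host act as both NVMe/TCP target (inside `cvl_0_0_ns_spdk`, where nvmf_tgt is later launched via `ip netns exec`) and initiator (in the default namespace).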
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1182026 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1182026 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1182026 ']' 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:09.197 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.197 [2024-07-15 23:51:43.438972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:09.197 [2024-07-15 23:51:43.439071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.197 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.197 [2024-07-15 23:51:43.505063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.197 [2024-07-15 23:51:43.595934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.197 [2024-07-15 23:51:43.595984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.197 [2024-07-15 23:51:43.596000] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.197 [2024-07-15 23:51:43.596013] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.197 [2024-07-15 23:51:43.596025] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.197 [2024-07-15 23:51:43.596103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.197 [2024-07-15 23:51:43.596174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.197 [2024-07-15 23:51:43.596148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.197 [2024-07-15 23:51:43.596171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 [2024-07-15 23:51:43.753852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 [2024-07-15 23:51:43.766063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.453 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.454 23:51:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.710 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.711 23:51:44 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:09.711 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:08:09.966 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:09.967 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
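The get_referral_ips checks above compare two views of the same referral list: the target's RPC view (`nvmf_discovery_get_referrals | jq -r '.[].address.traddr'`) and the initiator's view (`nvme discover ... -o json` filtered on subtype), both sorted before comparison. A minimal stand-in for the RPC-side extraction is sketched below; the sample JSON mimics the shape implied by the jq filter seen in this run, and the grep/sed pipeline is a jq-free substitute introduced here for illustration.

```shell
# Sample payload shaped like nvmf_discovery_get_referrals output
# (structure inferred from the jq filter '.[].address.traddr' used in the test).
rpc_json='[{"address":{"trtype":"TCP","traddr":"127.0.0.3","trsvcid":"4430"}},
 {"address":{"trtype":"TCP","traddr":"127.0.0.2","trsvcid":"4430"}},
 {"address":{"trtype":"TCP","traddr":"127.0.0.4","trsvcid":"4430"}}]'

# Pull every traddr value and sort, as get_referral_ips does before comparing
# the RPC view with the "nvme discover" view.
referral_ips() {
    printf '%s\n' "$1" | grep -o '"traddr":"[^"]*"' | sed 's/.*:"//; s/"$//' | sort
}

referral_ips "$rpc_json"
```

Sorting on both sides makes the comparison order-independent, which is why the log's `[[ 127.0.0.2 127.0.0.3 127.0.0.4 == ... ]]` checks hold regardless of the order referrals were registered in.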
00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:10.223 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:10.478 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.479 23:51:44 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.735 23:51:45 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.735 rmmod nvme_tcp 00:08:10.735 rmmod nvme_fabrics 00:08:10.735 rmmod nvme_keyring 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1182026 ']' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1182026 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1182026 ']' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1182026 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1182026 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1182026' 00:08:10.735 killing process with pid 1182026 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1182026 00:08:10.735 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1182026 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.992 23:51:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.526 23:51:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.526 00:08:13.526 real 0m5.912s 00:08:13.526 user 0m8.502s 00:08:13.526 sys 0m1.890s 00:08:13.526 23:51:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.526 23:51:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:13.526 ************************************ 
00:08:13.526 END TEST nvmf_referrals 00:08:13.526 ************************************ 00:08:13.526 23:51:47 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.526 23:51:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.526 23:51:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.526 23:51:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.526 ************************************ 00:08:13.526 START TEST nvmf_connect_disconnect 00:08:13.526 ************************************ 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:13.526 * Looking for test storage... 00:08:13.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.526 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.527 23:51:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:14.905 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:14.905 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.905 23:51:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:14.905 Found net devices under 0000:08:00.0: cvl_0_0 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:14.905 Found net devices under 0000:08:00.1: cvl_0_1 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:08:14.905 00:08:14.905 --- 10.0.0.2 ping statistics --- 00:08:14.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.905 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:08:14.905 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:14.905 00:08:14.906 --- 10.0.0.1 ping statistics --- 00:08:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.906 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1183710 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1183710 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1183710 ']' 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:14.906 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:14.906 [2024-07-15 23:51:49.324328] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:14.906 [2024-07-15 23:51:49.324431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.906 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.906 [2024-07-15 23:51:49.404316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.165 [2024-07-15 23:51:49.509954] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.165 [2024-07-15 23:51:49.510026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.165 [2024-07-15 23:51:49.510057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.165 [2024-07-15 23:51:49.510083] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.165 [2024-07-15 23:51:49.510105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.165 [2024-07-15 23:51:49.510194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.165 [2024-07-15 23:51:49.510254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.165 [2024-07-15 23:51:49.510315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.165 [2024-07-15 23:51:49.510324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.165 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.165 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:15.165 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.165 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.165 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 [2024-07-15 23:51:49.710302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.424 [2024-07-15 23:51:49.764628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:15.424 23:51:49 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:15.424 23:51:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:17.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:11.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.269 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.284 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.071 rmmod nvme_tcp 00:12:06.071 rmmod nvme_fabrics 00:12:06.071 rmmod nvme_keyring 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1183710 ']' 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1183710 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1183710 ']' 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1183710 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1183710 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1183710' 
00:12:06.071 killing process with pid 1183710 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1183710 00:12:06.071 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1183710 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.331 23:55:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.237 23:55:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:08.237 00:12:08.237 real 3m55.247s 00:12:08.237 user 14m57.955s 00:12:08.237 sys 0m32.259s 00:12:08.237 23:55:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:08.237 23:55:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.237 ************************************ 00:12:08.237 END TEST nvmf_connect_disconnect 00:12:08.237 ************************************ 00:12:08.237 23:55:42 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:08.237 23:55:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:08.237 23:55:42 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:12:08.237 23:55:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:08.237 ************************************ 00:12:08.237 START TEST nvmf_multitarget 00:12:08.237 ************************************ 00:12:08.237 23:55:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:08.496 * Looking for test storage... 00:12:08.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:08.496 23:55:42 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:08.496 23:55:42 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:08.496 23:55:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:09.873 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:09.873 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:09.873 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.874 23:55:44 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:09.874 Found net devices under 0000:08:00.0: cvl_0_0 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.874 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:10.132 Found net devices under 0000:08:00.1: cvl_0_1 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.132 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:10.133 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:12:10.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:12:10.133 00:12:10.133 --- 10.0.0.2 ping statistics --- 00:12:10.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.133 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:10.133 00:12:10.133 --- 10.0.0.1 ping statistics --- 00:12:10.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.133 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1207423 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1207423 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1207423 ']' 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:10.133 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.133 [2024-07-15 23:55:44.584899] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:10.133 [2024-07-15 23:55:44.585003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.133 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.391 [2024-07-15 23:55:44.651385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.391 [2024-07-15 23:55:44.742527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:10.391 [2024-07-15 23:55:44.742587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.391 [2024-07-15 23:55:44.742604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.391 [2024-07-15 23:55:44.742617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.391 [2024-07-15 23:55:44.742629] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.391 [2024-07-15 23:55:44.745160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.391 [2024-07-15 23:55:44.745197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.391 [2024-07-15 23:55:44.745257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.391 [2024-07-15 23:55:44.745293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.391 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:10.391 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:10.391 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:10.392 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.392 23:55:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.392 23:55:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.392 23:55:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.392 23:55:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.392 23:55:44 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:10.649 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:10.649 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:10.649 "nvmf_tgt_1" 00:12:10.649 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:10.907 "nvmf_tgt_2" 00:12:10.907 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.907 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:10.907 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:10.907 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.164 true 00:12:11.164 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.164 true 00:12:11.164 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.164 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:11.422 rmmod nvme_tcp 00:12:11.422 rmmod nvme_fabrics 00:12:11.422 rmmod nvme_keyring 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1207423 ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1207423 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1207423 ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1207423 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1207423 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1207423' 00:12:11.422 killing process 
with pid 1207423 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1207423 00:12:11.422 23:55:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1207423 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.682 23:55:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.220 23:55:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:14.220 00:12:14.220 real 0m5.378s 00:12:14.220 user 0m6.717s 00:12:14.220 sys 0m1.659s 00:12:14.220 23:55:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:14.220 23:55:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:14.220 ************************************ 00:12:14.220 END TEST nvmf_multitarget 00:12:14.220 ************************************ 00:12:14.220 23:55:48 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.220 23:55:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:14.220 23:55:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.220 23:55:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.220 
************************************ 00:12:14.220 START TEST nvmf_rpc 00:12:14.220 ************************************ 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.220 * Looking for test storage... 00:12:14.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.220 23:55:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.598 23:55:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:15.598 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:15.598 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.598 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:15.599 Found net devices under 0000:08:00.0: cvl_0_0 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:15.599 Found net devices under 0000:08:00.1: cvl_0_1 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.599 23:55:49 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:15.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:12:15.599 00:12:15.599 --- 10.0.0.2 ping statistics --- 00:12:15.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.599 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:12:15.599 00:12:15.599 --- 10.0.0.1 ping statistics --- 00:12:15.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.599 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:15.599 
23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1209021 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1209021 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1209021 ']' 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.599 23:55:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.599 [2024-07-15 23:55:50.027284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:15.599 [2024-07-15 23:55:50.027374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.599 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.599 [2024-07-15 23:55:50.091775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.858 [2024-07-15 23:55:50.179315] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.858 [2024-07-15 23:55:50.179373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.858 [2024-07-15 23:55:50.179390] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.858 [2024-07-15 23:55:50.179403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.858 [2024-07-15 23:55:50.179415] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.858 [2024-07-15 23:55:50.179493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.858 [2024-07-15 23:55:50.179571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.858 [2024-07-15 23:55:50.179652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.858 [2024-07-15 23:55:50.179656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:15.858 "tick_rate": 2700000000, 00:12:15.858 "poll_groups": [ 00:12:15.858 { 00:12:15.858 "name": "nvmf_tgt_poll_group_000", 00:12:15.858 "admin_qpairs": 0, 00:12:15.858 "io_qpairs": 0, 00:12:15.858 "current_admin_qpairs": 0, 00:12:15.858 "current_io_qpairs": 0, 00:12:15.858 "pending_bdev_io": 0, 00:12:15.858 "completed_nvme_io": 0, 00:12:15.858 "transports": [] 00:12:15.858 }, 00:12:15.858 { 00:12:15.858 "name": "nvmf_tgt_poll_group_001", 00:12:15.858 "admin_qpairs": 0, 00:12:15.858 "io_qpairs": 0, 00:12:15.858 "current_admin_qpairs": 
0, 00:12:15.858 "current_io_qpairs": 0, 00:12:15.858 "pending_bdev_io": 0, 00:12:15.858 "completed_nvme_io": 0, 00:12:15.858 "transports": [] 00:12:15.858 }, 00:12:15.858 { 00:12:15.858 "name": "nvmf_tgt_poll_group_002", 00:12:15.858 "admin_qpairs": 0, 00:12:15.858 "io_qpairs": 0, 00:12:15.858 "current_admin_qpairs": 0, 00:12:15.858 "current_io_qpairs": 0, 00:12:15.858 "pending_bdev_io": 0, 00:12:15.858 "completed_nvme_io": 0, 00:12:15.858 "transports": [] 00:12:15.858 }, 00:12:15.858 { 00:12:15.858 "name": "nvmf_tgt_poll_group_003", 00:12:15.858 "admin_qpairs": 0, 00:12:15.858 "io_qpairs": 0, 00:12:15.858 "current_admin_qpairs": 0, 00:12:15.858 "current_io_qpairs": 0, 00:12:15.858 "pending_bdev_io": 0, 00:12:15.858 "completed_nvme_io": 0, 00:12:15.858 "transports": [] 00:12:15.858 } 00:12:15.858 ] 00:12:15.858 }' 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:15.858 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.116 [2024-07-15 23:55:50.422125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.116 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:16.116 "tick_rate": 2700000000, 00:12:16.116 "poll_groups": [ 00:12:16.116 { 00:12:16.116 "name": "nvmf_tgt_poll_group_000", 00:12:16.116 "admin_qpairs": 0, 00:12:16.116 "io_qpairs": 0, 00:12:16.116 "current_admin_qpairs": 0, 00:12:16.116 "current_io_qpairs": 0, 00:12:16.116 "pending_bdev_io": 0, 00:12:16.116 "completed_nvme_io": 0, 00:12:16.116 "transports": [ 00:12:16.116 { 00:12:16.116 "trtype": "TCP" 00:12:16.116 } 00:12:16.116 ] 00:12:16.116 }, 00:12:16.116 { 00:12:16.116 "name": "nvmf_tgt_poll_group_001", 00:12:16.116 "admin_qpairs": 0, 00:12:16.116 "io_qpairs": 0, 00:12:16.116 "current_admin_qpairs": 0, 00:12:16.117 "current_io_qpairs": 0, 00:12:16.117 "pending_bdev_io": 0, 00:12:16.117 "completed_nvme_io": 0, 00:12:16.117 "transports": [ 00:12:16.117 { 00:12:16.117 "trtype": "TCP" 00:12:16.117 } 00:12:16.117 ] 00:12:16.117 }, 00:12:16.117 { 00:12:16.117 "name": "nvmf_tgt_poll_group_002", 00:12:16.117 "admin_qpairs": 0, 00:12:16.117 "io_qpairs": 0, 00:12:16.117 "current_admin_qpairs": 0, 00:12:16.117 "current_io_qpairs": 0, 00:12:16.117 "pending_bdev_io": 0, 00:12:16.117 "completed_nvme_io": 0, 00:12:16.117 "transports": [ 00:12:16.117 { 00:12:16.117 "trtype": "TCP" 00:12:16.117 } 00:12:16.117 ] 00:12:16.117 }, 00:12:16.117 { 00:12:16.117 "name": "nvmf_tgt_poll_group_003", 00:12:16.117 "admin_qpairs": 0, 00:12:16.117 "io_qpairs": 0, 00:12:16.117 "current_admin_qpairs": 0, 00:12:16.117 "current_io_qpairs": 0, 00:12:16.117 "pending_bdev_io": 0, 00:12:16.117 "completed_nvme_io": 0, 00:12:16.117 "transports": [ 00:12:16.117 { 00:12:16.117 "trtype": "TCP" 00:12:16.117 } 00:12:16.117 ] 00:12:16.117 } 
00:12:16.117 ] 00:12:16.117 }' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 Malloc1 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.117 [2024-07-15 23:55:50.580003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:12:16.117 [2024-07-15 23:55:50.602492] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:12:16.117 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.117 could not add new controller: failed to write to nvme-fabrics device 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.117 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.374 23:55:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.374 23:55:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.938 23:55:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.938 23:55:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:16.938 23:55:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.938 23:55:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:16.938 23:55:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.835 23:55:53 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.835 [2024-07-15 23:55:53.260315] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:12:18.835 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.835 could not add new controller: failed to write to nvme-fabrics device 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.835 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.422 23:55:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.422 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:19.422 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.422 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:19.422 23:55:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:21.318 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.576 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.577 [2024-07-15 23:55:55.916831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.577 23:55:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.142 23:55:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.142 23:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:22.142 23:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.142 23:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:22.142 23:55:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:24.038 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:24.038 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:24.038 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 [2024-07-15 23:55:58.456004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.039 23:55:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.604 23:55:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.604 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:24.604 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.604 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:24.604 23:55:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:26.503 23:56:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:26.503 23:56:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:26.503 23:56:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.503 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:26.503 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.503 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:26.503 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:26.761 23:56:01 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 [2024-07-15 23:56:01.095154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.761 23:56:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.761 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.325 23:56:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.325 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:27.325 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.325 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:27.325 23:56:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.220 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 
0 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 [2024-07-15 23:56:03.654474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.221 23:56:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.785 23:56:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.785 23:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 
00:12:29.785 23:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.785 23:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:29.785 23:56:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:31.683 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:31.683 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:31.683 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 [2024-07-15 23:56:06.298382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.941 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.514 23:56:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.514 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:32.514 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.514 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:32.514 23:56:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.414 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 [2024-07-15 23:56:08.880126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.414 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.415 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.415 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.415 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.415 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.415 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.415 [2024-07-15 23:56:08.928228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 [2024-07-15 23:56:08.976363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 [2024-07-15 23:56:09.024518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 [2024-07-15 23:56:09.072694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.673 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:34.673 "tick_rate": 2700000000, 00:12:34.673 "poll_groups": [ 00:12:34.673 { 00:12:34.673 "name": "nvmf_tgt_poll_group_000", 00:12:34.673 "admin_qpairs": 2, 00:12:34.673 "io_qpairs": 56, 00:12:34.673 "current_admin_qpairs": 0, 00:12:34.673 "current_io_qpairs": 0, 00:12:34.673 "pending_bdev_io": 0, 00:12:34.673 "completed_nvme_io": 107, 00:12:34.673 "transports": [ 00:12:34.673 { 00:12:34.673 "trtype": "TCP" 00:12:34.673 } 00:12:34.673 ] 00:12:34.673 }, 00:12:34.673 { 00:12:34.673 "name": "nvmf_tgt_poll_group_001", 00:12:34.673 "admin_qpairs": 2, 00:12:34.673 "io_qpairs": 56, 
00:12:34.673 "current_admin_qpairs": 0, 00:12:34.674 "current_io_qpairs": 0, 00:12:34.674 "pending_bdev_io": 0, 00:12:34.674 "completed_nvme_io": 155, 00:12:34.674 "transports": [ 00:12:34.674 { 00:12:34.674 "trtype": "TCP" 00:12:34.674 } 00:12:34.674 ] 00:12:34.674 }, 00:12:34.674 { 00:12:34.674 "name": "nvmf_tgt_poll_group_002", 00:12:34.674 "admin_qpairs": 1, 00:12:34.674 "io_qpairs": 56, 00:12:34.674 "current_admin_qpairs": 0, 00:12:34.674 "current_io_qpairs": 0, 00:12:34.674 "pending_bdev_io": 0, 00:12:34.674 "completed_nvme_io": 205, 00:12:34.674 "transports": [ 00:12:34.674 { 00:12:34.674 "trtype": "TCP" 00:12:34.674 } 00:12:34.674 ] 00:12:34.674 }, 00:12:34.674 { 00:12:34.674 "name": "nvmf_tgt_poll_group_003", 00:12:34.674 "admin_qpairs": 2, 00:12:34.674 "io_qpairs": 56, 00:12:34.674 "current_admin_qpairs": 0, 00:12:34.674 "current_io_qpairs": 0, 00:12:34.674 "pending_bdev_io": 0, 00:12:34.674 "completed_nvme_io": 107, 00:12:34.674 "transports": [ 00:12:34.674 { 00:12:34.674 "trtype": "TCP" 00:12:34.674 } 00:12:34.674 ] 00:12:34.674 } 00:12:34.674 ] 00:12:34.674 }' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.674 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 224 > 0 )) 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.931 rmmod nvme_tcp 00:12:34.931 rmmod nvme_fabrics 00:12:34.931 rmmod nvme_keyring 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1209021 ']' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1209021 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1209021 ']' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1209021 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1209021 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:34.931 
23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1209021' 00:12:34.931 killing process with pid 1209021 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1209021 00:12:34.931 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1209021 00:12:35.190 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.191 23:56:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.113 23:56:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.113 00:12:37.113 real 0m23.407s 00:12:37.113 user 1m16.504s 00:12:37.113 sys 0m3.601s 00:12:37.113 23:56:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.113 23:56:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.113 ************************************ 00:12:37.113 END TEST nvmf_rpc 00:12:37.113 ************************************ 00:12:37.113 23:56:11 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.113 23:56:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:37.113 23:56:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.113 23:56:11 nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:12:37.113 ************************************ 00:12:37.113 START TEST nvmf_invalid 00:12:37.113 ************************************ 00:12:37.113 23:56:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.393 * Looking for test storage... 00:12:37.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.393 23:56:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:38.769 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.769 23:56:13 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:38.769 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.769 23:56:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:38.770 Found net devices under 0000:08:00.0: cvl_0_0 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:38.770 Found net devices under 0000:08:00.1: cvl_0_1 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.770 23:56:13 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.770 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:39.027 00:12:39.027 --- 10.0.0.2 ping statistics --- 00:12:39.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.027 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:39.027 00:12:39.027 --- 10.0.0.1 ping statistics --- 00:12:39.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.027 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=1212291 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1212291 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1212291 ']' 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.027 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.027 [2024-07-15 23:56:13.385129] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:39.027 [2024-07-15 23:56:13.385247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.027 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.027 [2024-07-15 23:56:13.452513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.285 [2024-07-15 23:56:13.543630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.285 [2024-07-15 23:56:13.543683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
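The remainder of this log exercises SPDK's `nvmf_create_subsystem` JSON-RPC with deliberately invalid arguments (an unknown target name, then serial and model numbers containing a non-printable `\x1f` byte) and checks that the error text matches patterns such as `Unable to find target`, `Invalid SN`, and `Invalid MN`, which `target/invalid.sh` does with bash glob matches. A minimal sketch of that check in Python, assuming the error bodies copied from the log above; the `expect_rpc_error` helper is illustrative, not part of SPDK:

```python
import json

def expect_rpc_error(response_text: str, expected_code: int, needle: str) -> bool:
    """Return True if a JSON-RPC error body carries the expected error
    code and its message contains the expected substring."""
    # The log shows error bodies shaped like:
    #   {"code": -32603, "message": "Unable to find target foobar"}
    err = json.loads(response_text)
    return err.get("code") == expected_code and needle in err.get("message", "")

# Error bodies as they appear in the responses logged below.
unknown_target = '{"code": -32603, "message": "Unable to find target foobar"}'
bad_serial = '{"code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\\u001f"}'

assert expect_rpc_error(unknown_target, -32603, "Unable to find target")
assert expect_rpc_error(bad_serial, -32602, "Invalid SN")
```

The bash version in the log performs the same check with `[[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]`-style glob matching on the raw response text rather than parsing the JSON.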
00:12:39.285 [2024-07-15 23:56:13.543699] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.285 [2024-07-15 23:56:13.543712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.285 [2024-07-15 23:56:13.543724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.285 [2024-07-15 23:56:13.543814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.285 [2024-07-15 23:56:13.543840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.285 [2024-07-15 23:56:13.543902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.285 [2024-07-15 23:56:13.543905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.285 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12425 00:12:39.542 [2024-07-15 23:56:13.946413] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:39.542 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:12:39.542 { 00:12:39.542 "nqn": "nqn.2016-06.io.spdk:cnode12425", 00:12:39.542 "tgt_name": "foobar", 00:12:39.542 "method": "nvmf_create_subsystem", 00:12:39.542 "req_id": 1 00:12:39.542 } 00:12:39.542 Got JSON-RPC error response 00:12:39.542 response: 00:12:39.542 { 00:12:39.542 "code": -32603, 00:12:39.542 "message": "Unable to find target foobar" 00:12:39.542 }' 00:12:39.542 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:39.542 { 00:12:39.542 "nqn": "nqn.2016-06.io.spdk:cnode12425", 00:12:39.542 "tgt_name": "foobar", 00:12:39.542 "method": "nvmf_create_subsystem", 00:12:39.542 "req_id": 1 00:12:39.542 } 00:12:39.542 Got JSON-RPC error response 00:12:39.542 response: 00:12:39.542 { 00:12:39.542 "code": -32603, 00:12:39.542 "message": "Unable to find target foobar" 00:12:39.542 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:39.542 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:39.542 23:56:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8513 00:12:39.799 [2024-07-15 23:56:14.247427] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8513: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:39.799 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:39.799 { 00:12:39.799 "nqn": "nqn.2016-06.io.spdk:cnode8513", 00:12:39.799 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.799 "method": "nvmf_create_subsystem", 00:12:39.799 "req_id": 1 00:12:39.799 } 00:12:39.799 Got JSON-RPC error response 00:12:39.799 response: 00:12:39.799 { 00:12:39.799 "code": -32602, 00:12:39.799 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.799 }' 00:12:39.799 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:39.799 { 00:12:39.799 "nqn": 
"nqn.2016-06.io.spdk:cnode8513", 00:12:39.799 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.799 "method": "nvmf_create_subsystem", 00:12:39.799 "req_id": 1 00:12:39.799 } 00:12:39.799 Got JSON-RPC error response 00:12:39.799 response: 00:12:39.799 { 00:12:39.799 "code": -32602, 00:12:39.799 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.799 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:39.799 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:39.799 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8607 00:12:40.055 [2024-07-15 23:56:14.548423] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8607: invalid model number 'SPDK_Controller' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:40.311 { 00:12:40.311 "nqn": "nqn.2016-06.io.spdk:cnode8607", 00:12:40.311 "model_number": "SPDK_Controller\u001f", 00:12:40.311 "method": "nvmf_create_subsystem", 00:12:40.311 "req_id": 1 00:12:40.311 } 00:12:40.311 Got JSON-RPC error response 00:12:40.311 response: 00:12:40.311 { 00:12:40.311 "code": -32602, 00:12:40.311 "message": "Invalid MN SPDK_Controller\u001f" 00:12:40.311 }' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:40.311 { 00:12:40.311 "nqn": "nqn.2016-06.io.spdk:cnode8607", 00:12:40.311 "model_number": "SPDK_Controller\u001f", 00:12:40.311 "method": "nvmf_create_subsystem", 00:12:40.311 "req_id": 1 00:12:40.311 } 00:12:40.311 Got JSON-RPC error response 00:12:40.311 response: 00:12:40.311 { 00:12:40.311 "code": -32602, 00:12:40.311 "message": "Invalid MN SPDK_Controller\u001f" 00:12:40.311 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@19 -- # local length=21 ll 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x5f' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.311 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=i 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 110 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wH_<(=X:Y,9iQ(lin09(<' 00:12:40.312 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'wH_<(=X:Y,9iQ(lin09(<' nqn.2016-06.io.spdk:cnode9770 00:12:40.570 [2024-07-15 23:56:14.925698] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9770: invalid serial number 'wH_<(=X:Y,9iQ(lin09(<' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:40.570 { 00:12:40.570 "nqn": "nqn.2016-06.io.spdk:cnode9770", 00:12:40.570 "serial_number": "wH_<(=X:Y,9iQ(lin09(<", 00:12:40.570 "method": "nvmf_create_subsystem", 00:12:40.570 "req_id": 1 00:12:40.570 } 00:12:40.570 Got JSON-RPC error response 00:12:40.570 response: 00:12:40.570 { 00:12:40.570 "code": -32602, 00:12:40.570 "message": "Invalid SN wH_<(=X:Y,9iQ(lin09(<" 00:12:40.570 }' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:40.570 { 00:12:40.570 "nqn": "nqn.2016-06.io.spdk:cnode9770", 00:12:40.570 "serial_number": "wH_<(=X:Y,9iQ(lin09(<", 00:12:40.570 "method": "nvmf_create_subsystem", 00:12:40.570 "req_id": 1 00:12:40.570 } 00:12:40.570 Got JSON-RPC error response 00:12:40.570 response: 00:12:40.570 { 00:12:40.570 "code": -32602, 00:12:40.570 "message": "Invalid SN wH_<(=X:Y,9iQ(lin09(<" 00:12:40.570 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' 
'39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x5d' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=C 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 53 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x3f' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:40.570 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=A 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'B$xp'\''6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9' 00:12:40.571 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'B$xp'\''6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9' nqn.2016-06.io.spdk:cnode3515 00:12:41.136 [2024-07-15 23:56:15.343011] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3515: invalid model number 'B$xp'6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9' 00:12:41.136 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:41.136 { 00:12:41.136 "nqn": "nqn.2016-06.io.spdk:cnode3515", 00:12:41.136 "model_number": "B$xp'\''6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9", 00:12:41.136 "method": "nvmf_create_subsystem", 00:12:41.136 "req_id": 1 00:12:41.136 } 00:12:41.136 Got JSON-RPC error response 00:12:41.136 response: 00:12:41.136 { 00:12:41.136 "code": -32602, 00:12:41.136 "message": "Invalid MN B$xp'\''6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9" 00:12:41.136 }' 00:12:41.136 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:41.136 { 00:12:41.136 "nqn": "nqn.2016-06.io.spdk:cnode3515", 00:12:41.136 
"model_number": "B$xp'6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9", 00:12:41.136 "method": "nvmf_create_subsystem", 00:12:41.136 "req_id": 1 00:12:41.136 } 00:12:41.136 Got JSON-RPC error response 00:12:41.136 response: 00:12:41.136 { 00:12:41.136 "code": -32602, 00:12:41.136 "message": "Invalid MN B$xp'6O]Kqwx0((3Czet/5]0yN>&So?Wd8Lst sA9" 00:12:41.136 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:41.136 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:41.136 [2024-07-15 23:56:15.640068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.393 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:41.650 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:41.650 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:41.650 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:41.650 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:41.650 23:56:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:41.907 [2024-07-15 23:56:16.242109] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:41.907 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:41.907 { 00:12:41.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.907 "listen_address": { 00:12:41.907 "trtype": "tcp", 00:12:41.907 "traddr": "", 00:12:41.907 "trsvcid": "4421" 00:12:41.907 }, 00:12:41.907 "method": "nvmf_subsystem_remove_listener", 00:12:41.907 "req_id": 1 00:12:41.907 } 00:12:41.907 Got JSON-RPC error response 00:12:41.907 response: 00:12:41.907 { 
00:12:41.907 "code": -32602, 00:12:41.907 "message": "Invalid parameters" 00:12:41.907 }' 00:12:41.907 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:41.907 { 00:12:41.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:41.907 "listen_address": { 00:12:41.907 "trtype": "tcp", 00:12:41.907 "traddr": "", 00:12:41.907 "trsvcid": "4421" 00:12:41.907 }, 00:12:41.907 "method": "nvmf_subsystem_remove_listener", 00:12:41.907 "req_id": 1 00:12:41.908 } 00:12:41.908 Got JSON-RPC error response 00:12:41.908 response: 00:12:41.908 { 00:12:41.908 "code": -32602, 00:12:41.908 "message": "Invalid parameters" 00:12:41.908 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:41.908 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13229 -i 0 00:12:42.166 [2024-07-15 23:56:16.535005] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13229: invalid cntlid range [0-65519] 00:12:42.166 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:42.166 { 00:12:42.166 "nqn": "nqn.2016-06.io.spdk:cnode13229", 00:12:42.166 "min_cntlid": 0, 00:12:42.166 "method": "nvmf_create_subsystem", 00:12:42.166 "req_id": 1 00:12:42.166 } 00:12:42.166 Got JSON-RPC error response 00:12:42.166 response: 00:12:42.166 { 00:12:42.166 "code": -32602, 00:12:42.166 "message": "Invalid cntlid range [0-65519]" 00:12:42.166 }' 00:12:42.166 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:42.166 { 00:12:42.166 "nqn": "nqn.2016-06.io.spdk:cnode13229", 00:12:42.166 "min_cntlid": 0, 00:12:42.166 "method": "nvmf_create_subsystem", 00:12:42.166 "req_id": 1 00:12:42.166 } 00:12:42.166 Got JSON-RPC error response 00:12:42.166 response: 00:12:42.166 { 00:12:42.166 "code": -32602, 00:12:42.166 "message": "Invalid cntlid range [0-65519]" 00:12:42.166 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.166 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1706 -i 65520 00:12:42.424 [2024-07-15 23:56:16.832000] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1706: invalid cntlid range [65520-65519] 00:12:42.424 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:42.424 { 00:12:42.424 "nqn": "nqn.2016-06.io.spdk:cnode1706", 00:12:42.424 "min_cntlid": 65520, 00:12:42.424 "method": "nvmf_create_subsystem", 00:12:42.424 "req_id": 1 00:12:42.424 } 00:12:42.424 Got JSON-RPC error response 00:12:42.424 response: 00:12:42.424 { 00:12:42.424 "code": -32602, 00:12:42.424 "message": "Invalid cntlid range [65520-65519]" 00:12:42.424 }' 00:12:42.424 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:42.424 { 00:12:42.424 "nqn": "nqn.2016-06.io.spdk:cnode1706", 00:12:42.424 "min_cntlid": 65520, 00:12:42.424 "method": "nvmf_create_subsystem", 00:12:42.424 "req_id": 1 00:12:42.424 } 00:12:42.424 Got JSON-RPC error response 00:12:42.424 response: 00:12:42.424 { 00:12:42.424 "code": -32602, 00:12:42.424 "message": "Invalid cntlid range [65520-65519]" 00:12:42.424 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.424 23:56:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30304 -I 0 00:12:42.681 [2024-07-15 23:56:17.128956] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30304: invalid cntlid range [1-0] 00:12:42.681 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:42.681 { 00:12:42.681 "nqn": "nqn.2016-06.io.spdk:cnode30304", 00:12:42.681 "max_cntlid": 0, 00:12:42.681 "method": "nvmf_create_subsystem", 00:12:42.681 "req_id": 1 00:12:42.681 
} 00:12:42.681 Got JSON-RPC error response 00:12:42.681 response: 00:12:42.681 { 00:12:42.681 "code": -32602, 00:12:42.681 "message": "Invalid cntlid range [1-0]" 00:12:42.681 }' 00:12:42.681 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:42.682 { 00:12:42.682 "nqn": "nqn.2016-06.io.spdk:cnode30304", 00:12:42.682 "max_cntlid": 0, 00:12:42.682 "method": "nvmf_create_subsystem", 00:12:42.682 "req_id": 1 00:12:42.682 } 00:12:42.682 Got JSON-RPC error response 00:12:42.682 response: 00:12:42.682 { 00:12:42.682 "code": -32602, 00:12:42.682 "message": "Invalid cntlid range [1-0]" 00:12:42.682 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:42.682 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17833 -I 65520 00:12:42.939 [2024-07-15 23:56:17.381745] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17833: invalid cntlid range [1-65520] 00:12:42.939 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:42.939 { 00:12:42.939 "nqn": "nqn.2016-06.io.spdk:cnode17833", 00:12:42.939 "max_cntlid": 65520, 00:12:42.939 "method": "nvmf_create_subsystem", 00:12:42.939 "req_id": 1 00:12:42.939 } 00:12:42.939 Got JSON-RPC error response 00:12:42.939 response: 00:12:42.939 { 00:12:42.939 "code": -32602, 00:12:42.939 "message": "Invalid cntlid range [1-65520]" 00:12:42.939 }' 00:12:42.939 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:42.939 { 00:12:42.939 "nqn": "nqn.2016-06.io.spdk:cnode17833", 00:12:42.939 "max_cntlid": 65520, 00:12:42.939 "method": "nvmf_create_subsystem", 00:12:42.939 "req_id": 1 00:12:42.939 } 00:12:42.939 Got JSON-RPC error response 00:12:42.939 response: 00:12:42.939 { 00:12:42.939 "code": -32602, 00:12:42.939 "message": "Invalid cntlid range [1-65520]" 00:12:42.939 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:12:42.939 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12978 -i 6 -I 5 00:12:43.196 [2024-07-15 23:56:17.626567] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12978: invalid cntlid range [6-5] 00:12:43.196 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:43.196 { 00:12:43.196 "nqn": "nqn.2016-06.io.spdk:cnode12978", 00:12:43.196 "min_cntlid": 6, 00:12:43.196 "max_cntlid": 5, 00:12:43.196 "method": "nvmf_create_subsystem", 00:12:43.196 "req_id": 1 00:12:43.196 } 00:12:43.196 Got JSON-RPC error response 00:12:43.196 response: 00:12:43.196 { 00:12:43.196 "code": -32602, 00:12:43.196 "message": "Invalid cntlid range [6-5]" 00:12:43.196 }' 00:12:43.196 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:43.196 { 00:12:43.196 "nqn": "nqn.2016-06.io.spdk:cnode12978", 00:12:43.196 "min_cntlid": 6, 00:12:43.196 "max_cntlid": 5, 00:12:43.196 "method": "nvmf_create_subsystem", 00:12:43.196 "req_id": 1 00:12:43.196 } 00:12:43.196 Got JSON-RPC error response 00:12:43.196 response: 00:12:43.196 { 00:12:43.196 "code": -32602, 00:12:43.196 "message": "Invalid cntlid range [6-5]" 00:12:43.196 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:43.196 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:43.454 { 00:12:43.454 "name": "foobar", 00:12:43.454 "method": "nvmf_delete_target", 00:12:43.454 "req_id": 1 00:12:43.454 } 00:12:43.454 Got JSON-RPC error response 00:12:43.454 response: 00:12:43.454 { 00:12:43.454 "code": -32602, 00:12:43.454 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:43.454 }' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:43.454 { 00:12:43.454 "name": "foobar", 00:12:43.454 "method": "nvmf_delete_target", 00:12:43.454 "req_id": 1 00:12:43.454 } 00:12:43.454 Got JSON-RPC error response 00:12:43.454 response: 00:12:43.454 { 00:12:43.454 "code": -32602, 00:12:43.454 "message": "The specified target doesn't exist, cannot delete it." 00:12:43.454 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.454 rmmod nvme_tcp 00:12:43.454 rmmod nvme_fabrics 00:12:43.454 rmmod nvme_keyring 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1212291 ']' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1212291 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1212291 ']' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1212291 00:12:43.454 23:56:17 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1212291 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1212291' 00:12:43.454 killing process with pid 1212291 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1212291 00:12:43.454 23:56:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1212291 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.713 23:56:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.617 23:56:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.617 00:12:45.617 real 0m8.475s 00:12:45.617 user 0m21.723s 00:12:45.617 sys 0m2.117s 00:12:45.617 23:56:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:45.617 23:56:20 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.617 ************************************ 00:12:45.618 END TEST nvmf_invalid 00:12:45.618 ************************************ 00:12:45.618 23:56:20 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:45.618 23:56:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:45.618 23:56:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.618 23:56:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.618 ************************************ 00:12:45.618 START TEST nvmf_abort 00:12:45.618 ************************************ 00:12:45.618 23:56:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:45.618 * Looking for test storage... 00:12:45.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.876 
23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.876 
23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.876 23:56:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.784 23:56:21 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.784 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:47.785 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 
0x159b)' 00:12:47.785 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:47.785 Found net devices under 0000:08:00.0: cvl_0_0 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:47.785 Found net devices under 0000:08:00.1: cvl_0_1 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:12:47.785 00:12:47.785 --- 10.0.0.2 ping statistics --- 00:12:47.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.785 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:12:47.785 00:12:47.785 --- 10.0.0.1 ping statistics --- 00:12:47.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.785 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1214334 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1214334 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1214334 ']' 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.785 23:56:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:47.785 [2024-07-15 23:56:22.041763] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:12:47.785 [2024-07-15 23:56:22.041851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.785 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.785 [2024-07-15 23:56:22.106161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.785 [2024-07-15 23:56:22.192957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.785 [2024-07-15 23:56:22.193015] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.785 [2024-07-15 23:56:22.193032] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.785 [2024-07-15 23:56:22.193046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.785 [2024-07-15 23:56:22.193059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:47.785 [2024-07-15 23:56:22.193153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.785 [2024-07-15 23:56:22.193205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.785 [2024-07-15 23:56:22.193238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.785 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.785 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:12:47.785 23:56:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.785 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.785 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 [2024-07-15 23:56:22.324318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 Malloc0 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 Delay0 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:48.044 [2024-07-15 23:56:22.397462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.044 23:56:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:48.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.044 [2024-07-15 23:56:22.461840] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:50.572 Initializing NVMe Controllers 00:12:50.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:50.572 controller IO queue size 128 less than required 00:12:50.572 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:50.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:50.572 Initialization complete. Launching workers. 
00:12:50.572 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29668 00:12:50.572 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29733, failed to submit 62 00:12:50.572 success 29672, unsuccess 61, failed 0 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.572 rmmod nvme_tcp 00:12:50.572 rmmod nvme_fabrics 00:12:50.572 rmmod nvme_keyring 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1214334 ']' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1214334 ']' 00:12:50.572 23:56:24 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1214334' 00:12:50.572 killing process with pid 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1214334 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.572 23:56:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.479 23:56:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.479 00:12:52.479 real 0m6.779s 00:12:52.479 user 0m10.066s 00:12:52.479 sys 0m2.215s 00:12:52.479 23:56:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:12:52.479 23:56:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:52.479 ************************************ 00:12:52.479 END TEST nvmf_abort 00:12:52.479 ************************************ 00:12:52.479 23:56:26 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.479 23:56:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:52.479 23:56:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.479 23:56:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.479 ************************************ 00:12:52.479 START TEST nvmf_ns_hotplug_stress 00:12:52.479 ************************************ 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:52.479 * Looking for test storage... 
00:12:52.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.479 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.480 23:56:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.480 23:56:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:54.413 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.413 
23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:54.413 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:54.413 
Found net devices under 0000:08:00.0: cvl_0_0 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.413 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:54.414 Found net devices under 0000:08:00.1: cvl_0_1 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.414 23:56:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:12:54.414 00:12:54.414 --- 10.0.0.2 ping statistics --- 00:12:54.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.414 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:12:54.414 00:12:54.414 --- 10.0.0.1 ping statistics --- 00:12:54.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.414 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
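The `nvmf_tcp_init` trace above (common.sh@229–268) amounts to the following network layout: the first E810 port (`cvl_0_0`) is moved into a dedicated network namespace to act as the NVMe/TCP target, while the second port (`cvl_0_1`) stays in the root namespace as the initiator, and both directions are verified with a ping. A minimal sketch of that sequence, reconstructed from the trace (interface names, IPs, and the namespace name are taken from the log; requires root and two physically connected ports):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence from the trace above.
set -e

NS=cvl_0_0_ns_spdk

# Start clean: drop any stale IPv4 config on both ports
ip -4 addr flush cvl_0_0 || true
ip -4 addr flush cvl_0_1 || true

# Target side lives in its own namespace
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator gets 10.0.0.1 in the root ns; target gets 10.0.0.2 inside the ns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic (port 4420) in on the initiator-facing port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check reachability in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target port in a namespace is what lets a single host exercise real NIC-to-NIC TCP traffic; `NVMF_TARGET_NS_CMD` in the trace then prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk`.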
# timing_enter start_nvmf_tgt 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1216017 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1216017 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1216017 ']' 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.414 23:56:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.414 [2024-07-15 23:56:28.770309] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:12:54.414 [2024-07-15 23:56:28.770399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.414 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.414 [2024-07-15 23:56:28.835055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:54.414 [2024-07-15 23:56:28.921935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.414 [2024-07-15 23:56:28.921993] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.414 [2024-07-15 23:56:28.922010] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.414 [2024-07-15 23:56:28.922023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.414 [2024-07-15 23:56:28.922035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.414 [2024-07-15 23:56:28.922120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.414 [2024-07-15 23:56:28.922174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.414 [2024-07-15 23:56:28.922178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:54.672 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.930 [2024-07-15 23:56:29.318194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.930 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.188 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.446 [2024-07-15 23:56:29.909665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:12:55.446 23:56:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.012 23:56:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:56.269 Malloc0 00:12:56.269 23:56:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:56.527 Delay0 00:12:56.527 23:56:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.784 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:57.040 NULL1 00:12:57.040 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:57.296 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1216331 00:12:57.296 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:57.296 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:12:57.296 23:56:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
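Collected from the `rpc.py` calls traced above (ns_hotplug_stress.sh@27–42), the target build-up is the sequence below. Paths are shortened for readability; subsystem name, bdev sizes, and perf parameters are as they appear in the log:

```shell
# Sketch of the target construction traced above (rpc.py path shortened;
# in the log it is $rootdir/spdk/scripts/rpc.py against the running nvmf_tgt).
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a 32 MB malloc disk wrapped in a delay bdev, plus a null bdev
$RPC bdev_malloc_create 32 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC bdev_null_create NULL1 1000 512
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# I/O load that runs for the 30-second stress window
spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
```

The delay bdev's million-microsecond latencies are deliberate: they keep I/O in flight long enough that namespace removal races against outstanding reads, which is exactly what produces the `Read completed with error (sct=0, sc=11)` lines in the log.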
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.296 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.666 Read completed with error (sct=0, sc=11) 00:12:58.666 23:56:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.666 23:56:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:58.666 23:56:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:58.929 true 00:12:58.929 23:56:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:12:58.929 23:56:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.859 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.117 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 
00:13:00.117 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:00.374 true 00:13:00.374 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:00.374 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.632 23:56:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.890 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:00.890 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:01.148 true 00:13:01.148 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:01.148 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.406 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.406 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:01.406 23:56:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:01.664 true 00:13:01.664 23:56:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1216331 00:13:01.664 23:56:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.037 23:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.037 23:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:03.037 23:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:03.295 true 00:13:03.552 23:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:03.552 23:56:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.118 23:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.376 23:56:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:04.376 23:56:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:04.634 true 00:13:04.634 23:56:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:04.634 23:56:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.199 23:56:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.457 23:56:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:05.457 23:56:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:05.714 true 00:13:05.714 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:05.715 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.972 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.230 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:06.230 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:06.488 true 00:13:06.488 
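The repeating block in the trace (ns_hotplug_stress.sh@44–50, with `null_size` ticking 1001, 1002, 1003, …) is one iteration of the stress loop: while perf I/O is in flight, detach namespace 1, re-attach Delay0, and grow NULL1 by one size step. A sketch of that loop, reconstructed from the trace:

```shell
# Sketch of the hotplug-stress loop seen repeating in the trace above.
RPC="scripts/rpc.py"          # shortened; full path in the log
null_size=1000                # initial size set at ns_hotplug_stress.sh@25

while kill -0 "$PERF_PID"; do                              # @44: perf still running?
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove ns 1
    $RPC nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back
    null_size=$((null_size + 1))                           # @49: 1000 -> 1001 -> ...
    $RPC bdev_null_resize NULL1 "$null_size"               # @50: resize -> "true" in log
done
```

Each `true` line in the log is the RPC acknowledging the resize; the loop exits cleanly once `kill -0 $PERF_PID` fails, i.e. when the 30-second `spdk_nvme_perf` run finishes.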
23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:06.488 23:56:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.422 23:56:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.680 23:56:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:07.680 23:56:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:07.938 true 00:13:07.938 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:07.938 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.196 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.454 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:08.454 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:08.760 true 00:13:08.760 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:08.760 23:56:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.336 23:56:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.901 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:09.901 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:09.901 true 00:13:10.158 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:10.158 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.158 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.417 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:10.417 23:56:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:10.674 true 00:13:10.674 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:10.674 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.932 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.189 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:11.189 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:11.446 true 00:13:11.446 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:11.446 23:56:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 23:56:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.819 23:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:12.819 23:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:13.076 true 00:13:13.076 23:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:13.076 23:56:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.006 23:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.263 23:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:14.263 23:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:14.521 true 00:13:14.521 23:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:14.521 23:56:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.778 23:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.778 23:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:14.778 23:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:15.036 true 00:13:15.036 23:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:15.036 23:56:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.967 
23:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.224 23:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:16.224 23:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:16.483 true 00:13:16.483 23:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:16.483 23:56:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.740 23:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.997 23:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:16.997 23:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:17.253 true 00:13:17.253 23:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:17.253 23:56:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.182 23:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.440 23:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:18.440 23:56:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:18.697 true 00:13:18.697 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:18.697 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.954 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.211 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:19.211 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:19.468 true 00:13:19.468 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:19.468 23:56:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.725 23:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.982 23:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:19.982 23:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:20.239 true 00:13:20.239 23:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:20.239 23:56:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.171 23:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.429 23:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:21.429 23:56:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:21.995 true 00:13:21.995 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:21.995 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.995 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.253 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:22.253 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:22.511 true 00:13:22.511 23:56:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:22.511 23:56:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.768 23:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.026 23:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:23.026 23:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:23.283 true 00:13:23.284 23:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:23.284 23:56:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.279 23:56:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.538 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:24.538 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:24.796 true 00:13:25.054 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:25.054 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.311 23:56:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.569 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:25.569 23:56:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:25.825 true 00:13:25.825 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:25.825 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.082 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.339 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:26.339 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:26.596 true 00:13:26.596 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:26.596 23:57:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.527 23:57:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.527 Initializing 
NVMe Controllers 00:13:27.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:27.527 Controller IO queue size 128, less than required. 00:13:27.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.527 Controller IO queue size 128, less than required. 00:13:27.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:27.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:27.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:27.527 Initialization complete. Launching workers. 00:13:27.527 ======================================================== 00:13:27.527 Latency(us) 00:13:27.527 Device Information : IOPS MiB/s Average min max 00:13:27.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1049.84 0.51 60854.72 3132.42 1014047.55 00:13:27.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9185.88 4.49 13936.16 3890.09 531458.16 00:13:27.527 ======================================================== 00:13:27.527 Total : 10235.73 5.00 18748.43 3132.42 1014047.55 00:13:27.527 00:13:27.784 23:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:27.784 23:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:28.041 true 00:13:28.041 23:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1216331 00:13:28.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1216331) - No such process 00:13:28.041 23:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1216331 00:13:28.041 23:57:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.298 23:57:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:28.568 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:28.568 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:28.568 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:28.568 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.568 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:28.826 null0 00:13:29.083 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.083 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.083 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:29.340 null1 00:13:29.340 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.340 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.340 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:29.597 null2 00:13:29.597 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.597 
23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.597 23:57:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:29.855 null3 00:13:29.855 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:29.855 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:29.855 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:30.113 null4 00:13:30.113 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.113 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.113 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:30.371 null5 00:13:30.371 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.371 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.371 23:57:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:30.629 null6 00:13:30.629 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.629 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.629 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:30.888 
null7 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:30.888 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1219492 1219493 1219495 1219497 1219499 1219501 1219503 1219505 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.889 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:13:31.147 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.147 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.147 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.147 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.147 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.405 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.663 23:57:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.663 23:57:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.663 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.922 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.180 23:57:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.180 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.438 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.439 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.439 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.697 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.955 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.213 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.470 23:57:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.470 23:57:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.728 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.984 
23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.984 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.985 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.985 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.242 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:34.500 23:57:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:34.500 23:57:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.758 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.017 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.327 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.585 23:57:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.585 23:57:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.585 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.843 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.101 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.359 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.616 23:57:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:36.616 23:57:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:36.616 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.617 rmmod nvme_tcp 00:13:36.617 rmmod nvme_fabrics 00:13:36.617 rmmod nvme_keyring 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1216017 ']' 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1216017 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1216017 ']' 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1216017 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.617 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1216017 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1216017' 00:13:36.875 killing process with pid 1216017 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1216017 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1216017 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.875 23:57:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.434 23:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.434 00:13:39.434 real 0m46.472s 00:13:39.434 user 3m35.381s 00:13:39.434 sys 0m15.020s 00:13:39.434 23:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.434 23:57:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.434 ************************************ 00:13:39.434 END TEST nvmf_ns_hotplug_stress 00:13:39.434 ************************************ 00:13:39.434 23:57:13 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.434 23:57:13 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:39.434 23:57:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.434 23:57:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.434 ************************************ 00:13:39.434 START TEST nvmf_connect_stress 00:13:39.434 ************************************ 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:39.434 * Looking for test storage... 00:13:39.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.434 23:57:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.434 23:57:13 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:40.810 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.811 23:57:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:40.811 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:40.811 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:40.811 Found net devices under 0000:08:00.0: cvl_0_0 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:40.811 Found net devices under 0000:08:00.1: cvl_0_1 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.811 23:57:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:13:40.811 00:13:40.811 --- 10.0.0.2 ping statistics --- 00:13:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.811 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:13:40.811 00:13:40.811 --- 10.0.0.1 ping statistics --- 00:13:40.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.811 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:40.811 23:57:15 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1222120 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1222120 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1222120 ']' 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.811 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.811 [2024-07-15 23:57:15.299003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:40.811 [2024-07-15 23:57:15.299121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.069 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.069 [2024-07-15 23:57:15.365362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.069 [2024-07-15 23:57:15.455035] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.069 [2024-07-15 23:57:15.455093] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.069 [2024-07-15 23:57:15.455109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.069 [2024-07-15 23:57:15.455122] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.069 [2024-07-15 23:57:15.455134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:41.069 [2024-07-15 23:57:15.455273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.069 [2024-07-15 23:57:15.455303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.069 [2024-07-15 23:57:15.455306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.069 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.069 [2024-07-15 23:57:15.579425] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.331 [2024-07-15 23:57:15.608274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.331 NULL1 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1222152 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 
23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.331 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.332 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.589 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.589 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:41.589 23:57:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.589 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.589 23:57:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.846 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.846 23:57:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:41.846 23:57:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.846 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.846 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.409 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.409 23:57:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:42.409 23:57:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.409 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.409 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.666 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.666 23:57:16 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:42.666 23:57:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.666 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.666 23:57:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.923 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.923 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:42.923 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.923 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.923 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.180 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.180 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:43.180 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.180 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.180 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.437 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:43.437 23:57:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.437 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.437 23:57:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.002 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.002 23:57:18 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:44.002 23:57:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.002 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.002 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.259 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.260 23:57:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:44.260 23:57:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.260 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.260 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.516 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.516 23:57:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:44.516 23:57:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.516 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.516 23:57:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.773 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.773 23:57:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:44.773 23:57:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.773 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.773 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.030 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.030 23:57:19 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:45.030 23:57:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.030 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.030 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.595 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.595 23:57:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:45.595 23:57:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.595 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.595 23:57:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.853 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.853 23:57:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:45.853 23:57:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.853 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.853 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.110 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.110 23:57:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:46.110 23:57:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.110 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.110 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.368 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.368 23:57:20 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:46.368 23:57:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.368 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.368 23:57:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.625 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.625 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:46.625 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.625 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.625 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.190 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:47.190 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.190 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.190 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.448 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.448 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:47.448 23:57:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.448 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.448 23:57:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.706 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.706 23:57:22 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:47.706 23:57:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.706 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.706 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.963 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.963 23:57:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:47.963 23:57:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.963 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.963 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.528 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.528 23:57:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:48.528 23:57:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.528 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.528 23:57:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.786 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.786 23:57:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:48.786 23:57:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.786 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.786 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.043 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.043 23:57:23 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:49.043 23:57:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.043 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.043 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.299 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.299 23:57:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:49.299 23:57:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.299 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.299 23:57:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.555 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.555 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:49.555 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.555 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.555 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.119 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.119 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:50.119 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.119 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.119 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.376 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.376 23:57:24 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 1222152 00:13:50.376 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.376 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.376 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.634 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.634 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:50.634 23:57:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.634 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.634 23:57:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.891 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.891 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:50.891 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.891 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.891 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.149 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.149 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:51.149 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.149 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.149 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.406 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1222152 00:13:51.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1222152) - No such process 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1222152 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.665 rmmod nvme_tcp 00:13:51.665 rmmod nvme_fabrics 00:13:51.665 rmmod nvme_keyring 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1222120 ']' 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1222120 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1222120 ']' 
00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1222120 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:13:51.665 23:57:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1222120 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1222120' 00:13:51.665 killing process with pid 1222120 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1222120 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1222120 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.665 23:57:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.925 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.925 23:57:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.862 23:57:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:13:53.862 00:13:53.862 real 0m14.826s 00:13:53.862 user 0m38.391s 00:13:53.862 sys 0m5.275s 00:13:53.862 23:57:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.862 23:57:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.862 ************************************ 00:13:53.862 END TEST nvmf_connect_stress 00:13:53.862 ************************************ 00:13:53.862 23:57:28 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:53.862 23:57:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:53.862 23:57:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.862 23:57:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.862 ************************************ 00:13:53.862 START TEST nvmf_fused_ordering 00:13:53.862 ************************************ 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:53.862 * Looking for test storage... 
00:13:53.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.862 23:57:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.862 23:57:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.763 23:57:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.763 
23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:55.763 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.763 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:55.764 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.764 
23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:55.764 Found net devices under 0000:08:00.0: cvl_0_0 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.764 23:57:29 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:55.764 Found net devices under 0000:08:00.1: cvl_0_1 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.764 23:57:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:13:55.764 00:13:55.764 --- 10.0.0.2 ping statistics --- 00:13:55.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.764 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:55.764 00:13:55.764 --- 10.0.0.1 ping statistics --- 00:13:55.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.764 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:55.764 23:57:30 
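The trace above shows `nvmf_tcp_init` building the test topology: one port of the NIC (`cvl_0_0`) is moved into a private network namespace (`cvl_0_0_ns_spdk`) to act as the target side, the other port (`cvl_0_1`) stays in the root namespace as the initiator, the 10.0.0.0/24 addresses are assigned, an iptables rule opens TCP port 4420, and both directions are ping-checked. A minimal dry-run sketch of that sequence (the interface names, namespace name, addresses, and port are taken from this log; the `run()` wrapper that records commands instead of executing them is an illustrative addition, since the real commands require root and the physical NIC):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the two-namespace NVMe/TCP test topology from this log.
# run() records and echoes each command instead of executing it: the real
# versions need root privileges and the cvl_0_* interfaces to exist.
CMDS=""
run() { CMDS+="$* ; "; echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # namespace holding the target-side port
TGT_IF=cvl_0_0              # target interface (moved into the namespace)
INI_IF=cvl_0_1              # initiator interface (stays in the root namespace)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"               # target port into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator check
```

With this topology in place, the log then prefixes the target app with `ip netns exec cvl_0_0_ns_spdk` (via `NVMF_TARGET_NS_CMD`) so `nvmf_tgt` listens on 10.0.0.2:4420 inside the namespace while the fused_ordering initiator connects from the root namespace.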
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1224506 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1224506 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1224506 ']' 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.764 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 [2024-07-15 23:57:30.169781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:55.764 [2024-07-15 23:57:30.169892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.764 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.764 [2024-07-15 23:57:30.236605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.022 [2024-07-15 23:57:30.322753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:56.022 [2024-07-15 23:57:30.322814] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.022 [2024-07-15 23:57:30.322831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.022 [2024-07-15 23:57:30.322845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.022 [2024-07-15 23:57:30.322857] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.022 [2024-07-15 23:57:30.322895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 [2024-07-15 23:57:30.453130] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 [2024-07-15 23:57:30.469291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 NULL1 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.022 23:57:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:56.022 [2024-07-15 23:57:30.515234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:56.022 [2024-07-15 23:57:30.515288] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224550 ] 00:13:56.280 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.536 Attached to nqn.2016-06.io.spdk:cnode1 00:13:56.536 Namespace ID: 1 size: 1GB 00:13:56.536 fused_ordering(0) 00:13:56.536 fused_ordering(1) 00:13:56.536 fused_ordering(2) 00:13:56.536 fused_ordering(3) 00:13:56.536 fused_ordering(4) 00:13:56.536 fused_ordering(5) 00:13:56.536 fused_ordering(6) 00:13:56.536 fused_ordering(7) 00:13:56.536 fused_ordering(8) 00:13:56.536 fused_ordering(9) 00:13:56.536 fused_ordering(10) 00:13:56.536 fused_ordering(11) 00:13:56.536 fused_ordering(12) 00:13:56.536 fused_ordering(13) 00:13:56.536 fused_ordering(14) 00:13:56.536 fused_ordering(15) 00:13:56.536 fused_ordering(16) 00:13:56.536 fused_ordering(17) 00:13:56.536 fused_ordering(18) 00:13:56.536 fused_ordering(19) 00:13:56.536 fused_ordering(20) 00:13:56.536 fused_ordering(21) 00:13:56.536 fused_ordering(22) 00:13:56.536 fused_ordering(23) 00:13:56.536 fused_ordering(24) 00:13:56.536 fused_ordering(25) 00:13:56.536 fused_ordering(26) 00:13:56.536 
fused_ordering(27) 00:13:56.536 fused_ordering(28) 00:13:56.536 fused_ordering(29) 00:13:56.536 fused_ordering(30) 00:13:56.536 fused_ordering(31) 00:13:56.536 fused_ordering(32) 00:13:56.536 fused_ordering(33) 00:13:56.536 fused_ordering(34) 00:13:56.536 fused_ordering(35) 00:13:56.536 fused_ordering(36) 00:13:56.536 fused_ordering(37) 00:13:56.536 fused_ordering(38) 00:13:56.536 fused_ordering(39) 00:13:56.536 fused_ordering(40) 00:13:56.536 fused_ordering(41) 00:13:56.536 fused_ordering(42) 00:13:56.536 fused_ordering(43) 00:13:56.536 fused_ordering(44) 00:13:56.536 fused_ordering(45) 00:13:56.536 fused_ordering(46) 00:13:56.536 fused_ordering(47) 00:13:56.536 fused_ordering(48) 00:13:56.536 fused_ordering(49) 00:13:56.536 fused_ordering(50) 00:13:56.536 fused_ordering(51) 00:13:56.536 fused_ordering(52) 00:13:56.536 fused_ordering(53) 00:13:56.536 fused_ordering(54) 00:13:56.536 fused_ordering(55) 00:13:56.536 fused_ordering(56) 00:13:56.536 fused_ordering(57) 00:13:56.536 fused_ordering(58) 00:13:56.536 fused_ordering(59) 00:13:56.536 fused_ordering(60) 00:13:56.536 fused_ordering(61) 00:13:56.536 fused_ordering(62) 00:13:56.536 fused_ordering(63) 00:13:56.536 fused_ordering(64) 00:13:56.536 fused_ordering(65) 00:13:56.536 fused_ordering(66) 00:13:56.536 fused_ordering(67) 00:13:56.536 fused_ordering(68) 00:13:56.536 fused_ordering(69) 00:13:56.536 fused_ordering(70) 00:13:56.536 fused_ordering(71) 00:13:56.536 fused_ordering(72) 00:13:56.536 fused_ordering(73) 00:13:56.536 fused_ordering(74) 00:13:56.536 fused_ordering(75) 00:13:56.536 fused_ordering(76) 00:13:56.536 fused_ordering(77) 00:13:56.536 fused_ordering(78) 00:13:56.536 fused_ordering(79) 00:13:56.536 fused_ordering(80) 00:13:56.536 fused_ordering(81) 00:13:56.536 fused_ordering(82) 00:13:56.536 fused_ordering(83) 00:13:56.536 fused_ordering(84) 00:13:56.536 fused_ordering(85) 00:13:56.536 fused_ordering(86) 00:13:56.536 fused_ordering(87) 00:13:56.536 fused_ordering(88) 00:13:56.536 
fused_ordering(89) 00:13:56.536 fused_ordering(90) 00:13:56.536 fused_ordering(91) 00:13:56.536 fused_ordering(92) 00:13:56.536 fused_ordering(93) 00:13:56.536 fused_ordering(94) 00:13:56.536 fused_ordering(95) 00:13:56.536 fused_ordering(96) 00:13:56.536 fused_ordering(97) 00:13:56.536 fused_ordering(98) 00:13:56.536 fused_ordering(99) 00:13:56.537 fused_ordering(100) 00:13:56.537 fused_ordering(101) 00:13:56.537 fused_ordering(102) 00:13:56.537 fused_ordering(103) 00:13:56.537 fused_ordering(104) 00:13:56.537 fused_ordering(105) 00:13:56.537 fused_ordering(106) 00:13:56.537 fused_ordering(107) 00:13:56.537 fused_ordering(108) 00:13:56.537 fused_ordering(109) 00:13:56.537 fused_ordering(110) 00:13:56.537 fused_ordering(111) 00:13:56.537 fused_ordering(112) 00:13:56.537 fused_ordering(113) 00:13:56.537 fused_ordering(114) 00:13:56.537 fused_ordering(115) 00:13:56.537 fused_ordering(116) 00:13:56.537 fused_ordering(117) 00:13:56.537 fused_ordering(118) 00:13:56.537 fused_ordering(119) 00:13:56.537 fused_ordering(120) 00:13:56.537 fused_ordering(121) 00:13:56.537 fused_ordering(122) 00:13:56.537 fused_ordering(123) 00:13:56.537 fused_ordering(124) 00:13:56.537 fused_ordering(125) 00:13:56.537 fused_ordering(126) 00:13:56.537 fused_ordering(127) 00:13:56.537 fused_ordering(128) 00:13:56.537 fused_ordering(129) 00:13:56.537 fused_ordering(130) 00:13:56.537 fused_ordering(131) 00:13:56.537 fused_ordering(132) 00:13:56.537 fused_ordering(133) 00:13:56.537 fused_ordering(134) 00:13:56.537 fused_ordering(135) 00:13:56.537 fused_ordering(136) 00:13:56.537 fused_ordering(137) 00:13:56.537 fused_ordering(138) 00:13:56.537 fused_ordering(139) 00:13:56.537 fused_ordering(140) 00:13:56.537 fused_ordering(141) 00:13:56.537 fused_ordering(142) 00:13:56.537 fused_ordering(143) 00:13:56.537 fused_ordering(144) 00:13:56.537 fused_ordering(145) 00:13:56.537 fused_ordering(146) 00:13:56.537 fused_ordering(147) 00:13:56.537 fused_ordering(148) 00:13:56.537 fused_ordering(149) 
00:13:56.537 fused_ordering(150) 00:13:56.537 fused_ordering(151) 00:13:56.537 fused_ordering(152) 00:13:56.537 fused_ordering(153) 00:13:56.537 fused_ordering(154) 00:13:56.537 fused_ordering(155) 00:13:56.537 fused_ordering(156) 00:13:56.537 fused_ordering(157) 00:13:56.537 fused_ordering(158) 00:13:56.537 fused_ordering(159) 00:13:56.537 fused_ordering(160) 00:13:56.537 fused_ordering(161) 00:13:56.537 fused_ordering(162) 00:13:56.537 fused_ordering(163) 00:13:56.537 fused_ordering(164) 00:13:56.537 fused_ordering(165) 00:13:56.537 fused_ordering(166) 00:13:56.537 fused_ordering(167) 00:13:56.537 fused_ordering(168) 00:13:56.537 fused_ordering(169) 00:13:56.537 fused_ordering(170) 00:13:56.537 fused_ordering(171) 00:13:56.537 fused_ordering(172) 00:13:56.537 fused_ordering(173) 00:13:56.537 fused_ordering(174) 00:13:56.537 fused_ordering(175) 00:13:56.537 fused_ordering(176) 00:13:56.537 fused_ordering(177) 00:13:56.537 fused_ordering(178) 00:13:56.537 fused_ordering(179) 00:13:56.537 fused_ordering(180) 00:13:56.537 fused_ordering(181) 00:13:56.537 fused_ordering(182) 00:13:56.537 fused_ordering(183) 00:13:56.537 fused_ordering(184) 00:13:56.537 fused_ordering(185) 00:13:56.537 fused_ordering(186) 00:13:56.537 fused_ordering(187) 00:13:56.537 fused_ordering(188) 00:13:56.537 fused_ordering(189) 00:13:56.537 fused_ordering(190) 00:13:56.537 fused_ordering(191) 00:13:56.537 fused_ordering(192) 00:13:56.537 fused_ordering(193) 00:13:56.537 fused_ordering(194) 00:13:56.537 fused_ordering(195) 00:13:56.537 fused_ordering(196) 00:13:56.537 fused_ordering(197) 00:13:56.537 fused_ordering(198) 00:13:56.537 fused_ordering(199) 00:13:56.537 fused_ordering(200) 00:13:56.537 fused_ordering(201) 00:13:56.537 fused_ordering(202) 00:13:56.537 fused_ordering(203) 00:13:56.537 fused_ordering(204) 00:13:56.537 fused_ordering(205) 00:13:56.794 fused_ordering(206) 00:13:56.794 fused_ordering(207) 00:13:56.794 fused_ordering(208) 00:13:56.794 fused_ordering(209) 00:13:56.794 
fused_ordering(210) 00:13:56.794 [sequential fused_ordering(211) through fused_ordering(995) counter output elided; timestamps 00:13:56.794 through 00:13:58.862] fused_ordering(996)
00:13:58.862 fused_ordering(997) 00:13:58.862 fused_ordering(998) 00:13:58.862 fused_ordering(999) 00:13:58.862 fused_ordering(1000) 00:13:58.862 fused_ordering(1001) 00:13:58.862 fused_ordering(1002) 00:13:58.862 fused_ordering(1003) 00:13:58.862 fused_ordering(1004) 00:13:58.862 fused_ordering(1005) 00:13:58.862 fused_ordering(1006) 00:13:58.862 fused_ordering(1007) 00:13:58.862 fused_ordering(1008) 00:13:58.862 fused_ordering(1009) 00:13:58.862 fused_ordering(1010) 00:13:58.862 fused_ordering(1011) 00:13:58.862 fused_ordering(1012) 00:13:58.862 fused_ordering(1013) 00:13:58.862 fused_ordering(1014) 00:13:58.862 fused_ordering(1015) 00:13:58.862 fused_ordering(1016) 00:13:58.862 fused_ordering(1017) 00:13:58.862 fused_ordering(1018) 00:13:58.862 fused_ordering(1019) 00:13:58.862 fused_ordering(1020) 00:13:58.862 fused_ordering(1021) 00:13:58.862 fused_ordering(1022) 00:13:58.862 fused_ordering(1023) 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.862 rmmod nvme_tcp 00:13:58.862 rmmod nvme_fabrics 00:13:58.862 rmmod nvme_keyring 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:58.862 23:57:33 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1224506 ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1224506 ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1224506' 00:13:58.862 killing process with pid 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1224506 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.862 23:57:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.398 23:57:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:01.398 00:14:01.398 real 0m7.099s 00:14:01.398 user 0m5.141s 00:14:01.398 sys 0m2.830s 00:14:01.398 23:57:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.398 23:57:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.398 ************************************ 00:14:01.398 END TEST nvmf_fused_ordering 00:14:01.398 ************************************ 00:14:01.398 23:57:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:01.398 23:57:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:01.398 23:57:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:01.398 23:57:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.398 ************************************ 00:14:01.398 START TEST nvmf_delete_subsystem 00:14:01.398 ************************************ 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:01.398 * Looking for test storage... 
00:14:01.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.398 23:57:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.398 23:57:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.772 23:57:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:02.772 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:02.772 Found 
0000:08:00.1 (0x8086 - 0x159b) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.772 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:02.773 Found net devices under 0000:08:00.0: cvl_0_0 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:02.773 Found net devices under 0000:08:00.1: cvl_0_1 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.773 
23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:14:02.773 00:14:02.773 --- 10.0.0.2 ping statistics --- 00:14:02.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.773 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:02.773 00:14:02.773 --- 10.0.0.1 ping statistics --- 00:14:02.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.773 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.773 
23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1226218 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1226218 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1226218 ']' 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.773 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.773 [2024-07-15 23:57:37.214552] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:02.773 [2024-07-15 23:57:37.214660] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.773 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.773 [2024-07-15 23:57:37.280205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:03.031 [2024-07-15 23:57:37.366944] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:03.031 [2024-07-15 23:57:37.367008] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.031 [2024-07-15 23:57:37.367025] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.031 [2024-07-15 23:57:37.367039] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.031 [2024-07-15 23:57:37.367051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.031 [2024-07-15 23:57:37.367147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.031 [2024-07-15 23:57:37.367151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 [2024-07-15 23:57:37.501754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 [2024-07-15 23:57:37.517917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 NULL1 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.031 Delay0 00:14:03.031 23:57:37 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.031 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:03.288 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.288 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1226302 00:14:03.288 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:03.288 23:57:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:03.288 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.288 [2024-07-15 23:57:37.602695] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:14:05.187 23:57:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.187 23:57:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.187 23:57:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 
00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 [2024-07-15 23:57:39.645717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff088000c00 is same with the state(5) to be set 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error 
(sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 
Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Write completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 starting I/O failed: -6 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.187 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 starting I/O failed: -6 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 starting I/O failed: -6 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, 
sc=8) 00:14:05.188 starting I/O failed: -6 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 starting I/O failed: -6 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 starting I/O failed: -6 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 [2024-07-15 23:57:39.646622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1090 is same with the state(5) to be set 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 
Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:05.188 Read completed with error (sct=0, sc=8) 00:14:05.188 Write completed with error (sct=0, sc=8) 00:14:06.121 [2024-07-15 23:57:40.620324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c47c0 is same with the state(5) to be set 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed 
with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 [2024-07-15 23:57:40.646505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c19b0 is same with the state(5) to be set 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error 
(sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 [2024-07-15 23:57:40.646708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1270 is same with the state(5) to be set 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 [2024-07-15 23:57:40.648271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff08800c600 is same with the state(5) to be set 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error 
(sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Read completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 Write completed with error (sct=0, sc=8) 00:14:06.379 [2024-07-15 23:57:40.648897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff08800bfe0 is same with the state(5) to be set 00:14:06.379 Initializing NVMe Controllers 00:14:06.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.379 Controller IO queue size 128, less than required. 00:14:06.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:06.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:06.379 Initialization complete. Launching workers. 
00:14:06.379 ========================================================
00:14:06.379 Latency(us)
00:14:06.379 Device Information : IOPS MiB/s Average min max
00:14:06.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.14 0.08 904800.06 456.95 1014389.21
00:14:06.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.69 0.08 935181.15 738.81 2003703.86
00:14:06.379 ========================================================
00:14:06.379 Total : 325.82 0.16 919690.03 456.95 2003703.86
00:14:06.379
00:14:06.379 [2024-07-15 23:57:40.649341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c47c0 (9): Bad file descriptor
00:14:06.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:06.379 23:57:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.379 23:57:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:14:06.379 23:57:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1226302
00:14:06.379 23:57:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1226302
00:14:06.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1226302) - No such process
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1226302
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1226302
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1226302
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:06.945 [2024-07-15 23:57:41.174085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1226595
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:06.945 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:14:06.945 EAL: No free 2048 kB hugepages reported on node 1
00:14:06.945 [2024-07-15 23:57:41.234928] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:07.201 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:07.201 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:07.201 23:57:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:07.765 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:07.765 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:07.765 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:08.330 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:08.330 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:08.330 23:57:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:08.903 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:08.903 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:08.903 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:09.465 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:09.466 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:09.466 23:57:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:09.724 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:09.724 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:09.724 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:10.022 Initializing NVMe Controllers
00:14:10.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:10.022 Controller IO queue size 128, less than required.
00:14:10.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:10.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:10.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:10.022 Initialization complete. Launching workers.
00:14:10.022 ========================================================
00:14:10.022 Latency(us)
00:14:10.022 Device Information : IOPS MiB/s Average min max
00:14:10.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004344.13 1000178.16 1013570.74
00:14:10.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005451.12 1000232.87 1042393.44
00:14:10.022 ========================================================
00:14:10.022 Total : 256.00 0.12 1004897.62 1000178.16 1042393.44
00:14:10.022
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1226595
00:14:10.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1226595) - No such process
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1226595
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.280 rmmod nvme_tcp 00:14:10.280 rmmod nvme_fabrics 00:14:10.280 rmmod nvme_keyring 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1226218 ']' 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1226218 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1226218 ']' 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1226218 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1226218 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1226218' 00:14:10.280 killing process with pid 1226218 00:14:10.280 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1226218 00:14:10.280 23:57:44 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1226218 00:14:10.540 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.541 23:57:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.077 23:57:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.077 00:14:13.077 real 0m11.608s 00:14:13.077 user 0m27.243s 00:14:13.077 sys 0m2.598s 00:14:13.077 23:57:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.077 23:57:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:13.077 ************************************ 00:14:13.077 END TEST nvmf_delete_subsystem 00:14:13.077 ************************************ 00:14:13.077 23:57:47 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.077 23:57:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:13.077 23:57:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.077 23:57:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.077 ************************************ 00:14:13.077 START TEST nvmf_ns_masking 00:14:13.077 
************************************ 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.077 * Looking for test storage... 00:14:13.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.077 23:57:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=c514fec1-4498-4dd1-bfe3-6669b89f6c0d 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.078 23:57:47 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.453 
23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:14.453 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:14.453 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:14.453 Found net devices under 0000:08:00.0: cvl_0_0 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:14.453 Found net devices under 0000:08:00.1: cvl_0_1 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.453 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:14:14.454 00:14:14.454 --- 10.0.0.2 ping statistics --- 00:14:14.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.454 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:14.454 00:14:14.454 --- 10.0.0.1 ping statistics --- 00:14:14.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.454 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1228368 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1228368 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1228368 ']' 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.454 23:57:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.454 [2024-07-15 23:57:48.859902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:14.454 [2024-07-15 23:57:48.860014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.454 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.454 [2024-07-15 23:57:48.928542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.712 [2024-07-15 23:57:49.020534] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.712 [2024-07-15 23:57:49.020593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:14.712 [2024-07-15 23:57:49.020610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.712 [2024-07-15 23:57:49.020623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.712 [2024-07-15 23:57:49.020635] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.712 [2024-07-15 23:57:49.020699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.712 [2024-07-15 23:57:49.020750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.712 [2024-07-15 23:57:49.020777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.712 [2024-07-15 23:57:49.020780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.712 23:57:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:14.969 [2024-07-15 23:57:49.444682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.969 23:57:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:14.969 23:57:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:14.969 23:57:49 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.532 Malloc1 00:14:15.532 23:57:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:15.789 Malloc2 00:14:15.789 23:57:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.045 23:57:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:16.302 23:57:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.559 [2024-07-15 23:57:50.956079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.559 23:57:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:16.559 23:57:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c514fec1-4498-4dd1-bfe3-6669b89f6c0d -a 10.0.0.2 -s 4420 -i 4 00:14:16.816 23:57:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.816 23:57:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:16.816 23:57:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.816 23:57:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:16.816 23:57:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 
-- # sleep 2 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:18.712 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:18.968 [ 0]:0x1 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=63aab5137f8247fdbfabeb8e0320f6a4 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 63aab5137f8247fdbfabeb8e0320f6a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.968 23:57:53 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:19.224 [ 0]:0x1 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=63aab5137f8247fdbfabeb8e0320f6a4 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 63aab5137f8247fdbfabeb8e0320f6a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:19.224 [ 1]:0x2 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.224 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.481 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:19.481 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.481 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:19.481 23:57:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:14:19.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.736 23:57:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.992 23:57:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:20.249 23:57:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:20.249 23:57:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c514fec1-4498-4dd1-bfe3-6669b89f6c0d -a 10.0.0.2 -s 4420 -i 4 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:20.506 23:57:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.401 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.658 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:22.658 23:57:56 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.658 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:22.658 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:22.658 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:22.658 23:57:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:22.659 [ 0]:0x2 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.659 23:57:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:22.917 [ 0]:0x1 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=63aab5137f8247fdbfabeb8e0320f6a4 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 63aab5137f8247fdbfabeb8e0320f6a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:22.917 [ 1]:0x2 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.917 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:23.176 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:23.435 [ 0]:0x2 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.435 23:57:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.693 23:57:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:23.693 23:57:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c514fec1-4498-4dd1-bfe3-6669b89f6c0d -a 10.0.0.2 -s 4420 -i 4 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:23.951 23:57:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:25.850 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:26.108 [ 0]:0x1 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=63aab5137f8247fdbfabeb8e0320f6a4 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 63aab5137f8247fdbfabeb8e0320f6a4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:26.108 [ 1]:0x2 00:14:26.108 23:58:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.108 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.367 23:58:00 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:26.367 [ 0]:0x2 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.367 23:58:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:26.625 [2024-07-15 23:58:01.121191] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:26.625 request: 00:14:26.625 { 00:14:26.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.625 "nsid": 2, 00:14:26.625 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.625 "method": "nvmf_ns_remove_host", 00:14:26.625 "req_id": 1 00:14:26.625 } 00:14:26.625 Got JSON-RPC error response 00:14:26.625 response: 00:14:26.625 { 00:14:26.625 "code": -32602, 00:14:26.625 "message": "Invalid parameters" 00:14:26.625 } 00:14:26.970 
23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # es=1 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:26.970 [ 0]:0x2 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=24531f96409347daa1d445368ac7411d 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 24531f96409347daa1d445368ac7411d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.970 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.229 rmmod nvme_tcp 00:14:27.229 rmmod nvme_fabrics 00:14:27.229 rmmod nvme_keyring 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1228368 ']' 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1228368 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1228368 ']' 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1228368 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1228368 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1228368' 00:14:27.229 killing process with pid 1228368 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1228368 00:14:27.229 23:58:01 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@970 -- # wait 1228368 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.488 23:58:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.024 23:58:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.024 00:14:30.024 real 0m16.902s 00:14:30.024 user 0m55.049s 00:14:30.024 sys 0m3.559s 00:14:30.024 23:58:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.024 23:58:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.024 ************************************ 00:14:30.024 END TEST nvmf_ns_masking 00:14:30.024 ************************************ 00:14:30.024 23:58:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:30.024 23:58:03 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.024 23:58:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:30.024 23:58:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:30.024 23:58:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.024 ************************************ 00:14:30.024 START TEST nvmf_nvme_cli 00:14:30.024 
************************************ 00:14:30.024 23:58:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.024 * Looking for test storage... 00:14:30.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.024 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.025 23:58:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.025 23:58:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.405 23:58:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.405 23:58:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:31.405 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:31.405 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.405 23:58:05 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:31.405 Found net devices under 0000:08:00.0: cvl_0_0 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:31.405 Found net devices under 0000:08:00.1: cvl_0_1 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.405 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:14:31.406 00:14:31.406 --- 10.0.0.2 ping statistics --- 00:14:31.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.406 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:31.406 00:14:31.406 --- 10.0.0.1 ping statistics --- 00:14:31.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.406 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1231055 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1231055 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1231055 ']' 
00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:31.406 23:58:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.406 [2024-07-15 23:58:05.801809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:31.406 [2024-07-15 23:58:05.801900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.406 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.406 [2024-07-15 23:58:05.869013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.665 [2024-07-15 23:58:05.960926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.665 [2024-07-15 23:58:05.960985] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.665 [2024-07-15 23:58:05.961001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.665 [2024-07-15 23:58:05.961014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.665 [2024-07-15 23:58:05.961026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:31.665 [2024-07-15 23:58:05.961092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.665 [2024-07-15 23:58:05.961163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.665 [2024-07-15 23:58:05.963161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.665 [2024-07-15 23:58:05.963196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 [2024-07-15 23:58:06.113780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 Malloc0 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.665 
23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 Malloc1 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.665 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.923 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.923 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.923 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.923 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.923 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.924 [2024-07-15 23:58:06.193398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:14:31.924 00:14:31.924 Discovery Log Number of Records 2, Generation counter 2 00:14:31.924 =====Discovery Log Entry 0====== 00:14:31.924 trtype: tcp 00:14:31.924 adrfam: ipv4 00:14:31.924 subtype: current discovery subsystem 00:14:31.924 treq: not required 00:14:31.924 portid: 0 00:14:31.924 trsvcid: 4420 00:14:31.924 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:31.924 traddr: 10.0.0.2 00:14:31.924 eflags: explicit discovery connections, duplicate discovery information 00:14:31.924 sectype: none 00:14:31.924 =====Discovery Log Entry 1====== 00:14:31.924 trtype: tcp 00:14:31.924 adrfam: ipv4 00:14:31.924 subtype: nvme subsystem 00:14:31.924 treq: not required 00:14:31.924 portid: 0 00:14:31.924 trsvcid: 4420 00:14:31.924 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:31.924 traddr: 10.0.0.2 00:14:31.924 eflags: none 00:14:31.924 sectype: none 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:31.924 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:32.491 23:58:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:34.390 /dev/nvme0n1 ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:34.390 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:34.391 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:34.649 23:58:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.650 rmmod nvme_tcp 00:14:34.650 rmmod nvme_fabrics 00:14:34.650 rmmod nvme_keyring 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1231055 ']' 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1231055 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@946 -- # '[' -z 1231055 ']' 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1231055 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:34.650 23:58:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1231055 00:14:34.650 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:34.650 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:34.650 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1231055' 00:14:34.650 killing process with pid 1231055 00:14:34.650 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1231055 00:14:34.650 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1231055 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.910 23:58:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.812 23:58:11 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:36.812 00:14:36.812 real 0m7.301s 00:14:36.812 user 
0m13.351s 00:14:36.812 sys 0m1.854s 00:14:36.812 23:58:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.812 23:58:11 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.812 ************************************ 00:14:36.812 END TEST nvmf_nvme_cli 00:14:36.812 ************************************ 00:14:36.812 23:58:11 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:36.812 23:58:11 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:36.812 23:58:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:36.812 23:58:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:36.812 23:58:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.812 ************************************ 00:14:36.812 START TEST nvmf_vfio_user 00:14:36.812 ************************************ 00:14:36.812 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:37.071 * Looking for test storage... 
00:14:37.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.071 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.071 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.072 
23:58:11 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1231697 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1231697' 00:14:37.072 Process pid: 1231697 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1231697 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1231697 ']' 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.072 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:37.072 [2024-07-15 23:58:11.422556] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:37.072 [2024-07-15 23:58:11.422661] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.072 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.072 [2024-07-15 23:58:11.482389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:37.072 [2024-07-15 23:58:11.570249] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.072 [2024-07-15 23:58:11.570303] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.072 [2024-07-15 23:58:11.570319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.072 [2024-07-15 23:58:11.570333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.072 [2024-07-15 23:58:11.570345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:37.072 [2024-07-15 23:58:11.570428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.072 [2024-07-15 23:58:11.570508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.072 [2024-07-15 23:58:11.570589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.072 [2024-07-15 23:58:11.570593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.330 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.330 23:58:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:37.330 23:58:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:38.263 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:38.521 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:38.521 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:38.521 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.521 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:38.521 23:58:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.779 Malloc1 00:14:39.035 23:58:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:39.035 23:58:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:39.293 23:58:13 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:39.550 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:39.550 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:39.550 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:39.807 Malloc2 00:14:39.807 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:40.065 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:40.323 23:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:40.581 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:40.581 [2024-07-15 23:58:15.061952] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:40.581 [2024-07-15 23:58:15.061997] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232066 ] 00:14:40.581 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.841 [2024-07-15 23:58:15.103163] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:40.841 [2024-07-15 23:58:15.110591] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.841 [2024-07-15 23:58:15.110622] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb52818d000 00:14:40.841 [2024-07-15 23:58:15.111588] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.112581] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.113582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.114587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.115592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.116605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.117600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.118603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:40.841 [2024-07-15 23:58:15.119612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:40.841 [2024-07-15 23:58:15.119634] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb526f43000 00:14:40.841 [2024-07-15 23:58:15.121085] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.841 [2024-07-15 23:58:15.141021] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:40.841 [2024-07-15 23:58:15.141062] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:40.841 [2024-07-15 23:58:15.145781] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:40.841 [2024-07-15 23:58:15.145841] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:40.841 [2024-07-15 23:58:15.145943] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:40.841 [2024-07-15 23:58:15.145973] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:40.841 [2024-07-15 23:58:15.145985] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:40.841 [2024-07-15 23:58:15.146769] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:40.841 [2024-07-15 23:58:15.146793] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:40.841 [2024-07-15 23:58:15.146807] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:40.841 [2024-07-15 23:58:15.147779] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:40.841 [2024-07-15 23:58:15.147799] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:40.841 [2024-07-15 23:58:15.147814] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.148808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:40.841 [2024-07-15 23:58:15.148828] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.149802] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:40.841 [2024-07-15 23:58:15.149831] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:40.841 [2024-07-15 23:58:15.149841] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.149854] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.149966] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:40.841 [2024-07-15 23:58:15.149976] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.149986] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:40.841 [2024-07-15 23:58:15.150813] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:40.841 [2024-07-15 23:58:15.151803] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:40.841 [2024-07-15 23:58:15.152813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:40.841 [2024-07-15 23:58:15.153803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:40.841 [2024-07-15 23:58:15.153929] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:40.841 [2024-07-15 23:58:15.154817] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:40.841 [2024-07-15 23:58:15.154835] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:40.841 [2024-07-15 23:58:15.154845] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:40.841 [2024-07-15 23:58:15.154873] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:40.841 [2024-07-15 23:58:15.154888] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:40.841 [2024-07-15 23:58:15.154917] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.841 [2024-07-15 23:58:15.154928] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.841 [2024-07-15 23:58:15.154949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.841 [2024-07-15 23:58:15.155024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:40.841 [2024-07-15 23:58:15.155048] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:40.841 [2024-07-15 23:58:15.155059] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:40.841 [2024-07-15 23:58:15.155068] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:40.841 [2024-07-15 23:58:15.155077] 
nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:40.841 [2024-07-15 23:58:15.155086] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:40.841 [2024-07-15 23:58:15.155095] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:40.841 [2024-07-15 23:58:15.155104] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:40.841 [2024-07-15 23:58:15.155118] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:40.841 [2024-07-15 23:58:15.155134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:40.841 [2024-07-15 23:58:15.155160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:40.841 [2024-07-15 23:58:15.155179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.841 [2024-07-15 23:58:15.155194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.842 [2024-07-15 23:58:15.155208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.842 [2024-07-15 23:58:15.155222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:40.842 [2024-07-15 23:58:15.155237] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155254] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155297] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:40.842 [2024-07-15 23:58:15.155306] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155318] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155334] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155437] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155455] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155469] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:40.842 [2024-07-15 23:58:15.155478] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:40.842 [2024-07-15 23:58:15.155489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155523] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:40.842 [2024-07-15 23:58:15.155544] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155560] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155574] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.842 [2024-07-15 23:58:15.155583] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.842 [2024-07-15 23:58:15.155594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155641] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155657] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155675] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:40.842 [2024-07-15 23:58:15.155691] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.842 [2024-07-15 23:58:15.155703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155738] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155751] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155766] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155778] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155788] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155798] 
nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:40.842 [2024-07-15 23:58:15.155807] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:40.842 [2024-07-15 23:58:15.155817] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:40.842 [2024-07-15 23:58:15.155848] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.155968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.155988] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:40.842 
[2024-07-15 23:58:15.155998] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:40.842 [2024-07-15 23:58:15.156006] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:40.842 [2024-07-15 23:58:15.156013] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:40.842 [2024-07-15 23:58:15.156024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:40.842 [2024-07-15 23:58:15.156037] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:40.842 [2024-07-15 23:58:15.156046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:40.842 [2024-07-15 23:58:15.156061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.156074] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:40.842 [2024-07-15 23:58:15.156083] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.842 [2024-07-15 23:58:15.156094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.156108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:40.842 [2024-07-15 23:58:15.156117] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:40.842 [2024-07-15 23:58:15.156127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:14:40.842 [2024-07-15 23:58:15.156147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.156171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.156189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:40.842 [2024-07-15 23:58:15.156208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:40.842 ===================================================== 00:14:40.842 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:40.842 ===================================================== 00:14:40.842 Controller Capabilities/Features 00:14:40.842 ================================ 00:14:40.842 Vendor ID: 4e58 00:14:40.842 Subsystem Vendor ID: 4e58 00:14:40.842 Serial Number: SPDK1 00:14:40.842 Model Number: SPDK bdev Controller 00:14:40.842 Firmware Version: 24.05.1 00:14:40.842 Recommended Arb Burst: 6 00:14:40.842 IEEE OUI Identifier: 8d 6b 50 00:14:40.842 Multi-path I/O 00:14:40.842 May have multiple subsystem ports: Yes 00:14:40.842 May have multiple controllers: Yes 00:14:40.842 Associated with SR-IOV VF: No 00:14:40.842 Max Data Transfer Size: 131072 00:14:40.842 Max Number of Namespaces: 32 00:14:40.842 Max Number of I/O Queues: 127 00:14:40.842 NVMe Specification Version (VS): 1.3 00:14:40.842 NVMe Specification Version (Identify): 1.3 00:14:40.842 Maximum Queue Entries: 256 00:14:40.842 Contiguous Queues Required: Yes 00:14:40.842 Arbitration Mechanisms Supported 00:14:40.842 Weighted Round Robin: Not Supported 00:14:40.842 Vendor Specific: Not Supported 00:14:40.842 Reset Timeout: 15000 ms 00:14:40.842 Doorbell Stride: 4 bytes 00:14:40.842 NVM 
Subsystem Reset: Not Supported 00:14:40.842 Command Sets Supported 00:14:40.842 NVM Command Set: Supported 00:14:40.842 Boot Partition: Not Supported 00:14:40.842 Memory Page Size Minimum: 4096 bytes 00:14:40.842 Memory Page Size Maximum: 4096 bytes 00:14:40.842 Persistent Memory Region: Not Supported 00:14:40.842 Optional Asynchronous Events Supported 00:14:40.842 Namespace Attribute Notices: Supported 00:14:40.842 Firmware Activation Notices: Not Supported 00:14:40.842 ANA Change Notices: Not Supported 00:14:40.842 PLE Aggregate Log Change Notices: Not Supported 00:14:40.842 LBA Status Info Alert Notices: Not Supported 00:14:40.842 EGE Aggregate Log Change Notices: Not Supported 00:14:40.842 Normal NVM Subsystem Shutdown event: Not Supported 00:14:40.842 Zone Descriptor Change Notices: Not Supported 00:14:40.842 Discovery Log Change Notices: Not Supported 00:14:40.842 Controller Attributes 00:14:40.842 128-bit Host Identifier: Supported 00:14:40.842 Non-Operational Permissive Mode: Not Supported 00:14:40.842 NVM Sets: Not Supported 00:14:40.842 Read Recovery Levels: Not Supported 00:14:40.842 Endurance Groups: Not Supported 00:14:40.843 Predictable Latency Mode: Not Supported 00:14:40.843 Traffic Based Keep ALive: Not Supported 00:14:40.843 Namespace Granularity: Not Supported 00:14:40.843 SQ Associations: Not Supported 00:14:40.843 UUID List: Not Supported 00:14:40.843 Multi-Domain Subsystem: Not Supported 00:14:40.843 Fixed Capacity Management: Not Supported 00:14:40.843 Variable Capacity Management: Not Supported 00:14:40.843 Delete Endurance Group: Not Supported 00:14:40.843 Delete NVM Set: Not Supported 00:14:40.843 Extended LBA Formats Supported: Not Supported 00:14:40.843 Flexible Data Placement Supported: Not Supported 00:14:40.843 00:14:40.843 Controller Memory Buffer Support 00:14:40.843 ================================ 00:14:40.843 Supported: No 00:14:40.843 00:14:40.843 Persistent Memory Region Support 00:14:40.843 ================================ 
00:14:40.843 Supported: No 00:14:40.843 00:14:40.843 Admin Command Set Attributes 00:14:40.843 ============================ 00:14:40.843 Security Send/Receive: Not Supported 00:14:40.843 Format NVM: Not Supported 00:14:40.843 Firmware Activate/Download: Not Supported 00:14:40.843 Namespace Management: Not Supported 00:14:40.843 Device Self-Test: Not Supported 00:14:40.843 Directives: Not Supported 00:14:40.843 NVMe-MI: Not Supported 00:14:40.843 Virtualization Management: Not Supported 00:14:40.843 Doorbell Buffer Config: Not Supported 00:14:40.843 Get LBA Status Capability: Not Supported 00:14:40.843 Command & Feature Lockdown Capability: Not Supported 00:14:40.843 Abort Command Limit: 4 00:14:40.843 Async Event Request Limit: 4 00:14:40.843 Number of Firmware Slots: N/A 00:14:40.843 Firmware Slot 1 Read-Only: N/A 00:14:40.843 Firmware Activation Without Reset: N/A 00:14:40.843 Multiple Update Detection Support: N/A 00:14:40.843 Firmware Update Granularity: No Information Provided 00:14:40.843 Per-Namespace SMART Log: No 00:14:40.843 Asymmetric Namespace Access Log Page: Not Supported 00:14:40.843 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:40.843 Command Effects Log Page: Supported 00:14:40.843 Get Log Page Extended Data: Supported 00:14:40.843 Telemetry Log Pages: Not Supported 00:14:40.843 Persistent Event Log Pages: Not Supported 00:14:40.843 Supported Log Pages Log Page: May Support 00:14:40.843 Commands Supported & Effects Log Page: Not Supported 00:14:40.843 Feature Identifiers & Effects Log Page:May Support 00:14:40.843 NVMe-MI Commands & Effects Log Page: May Support 00:14:40.843 Data Area 4 for Telemetry Log: Not Supported 00:14:40.843 Error Log Page Entries Supported: 128 00:14:40.843 Keep Alive: Supported 00:14:40.843 Keep Alive Granularity: 10000 ms 00:14:40.843 00:14:40.843 NVM Command Set Attributes 00:14:40.843 ========================== 00:14:40.843 Submission Queue Entry Size 00:14:40.843 Max: 64 00:14:40.843 Min: 64 00:14:40.843 Completion 
Queue Entry Size 00:14:40.843 Max: 16 00:14:40.843 Min: 16 00:14:40.843 Number of Namespaces: 32 00:14:40.843 Compare Command: Supported 00:14:40.843 Write Uncorrectable Command: Not Supported 00:14:40.843 Dataset Management Command: Supported 00:14:40.843 Write Zeroes Command: Supported 00:14:40.843 Set Features Save Field: Not Supported 00:14:40.843 Reservations: Not Supported 00:14:40.843 Timestamp: Not Supported 00:14:40.843 Copy: Supported 00:14:40.843 Volatile Write Cache: Present 00:14:40.843 Atomic Write Unit (Normal): 1 00:14:40.843 Atomic Write Unit (PFail): 1 00:14:40.843 Atomic Compare & Write Unit: 1 00:14:40.843 Fused Compare & Write: Supported 00:14:40.843 Scatter-Gather List 00:14:40.843 SGL Command Set: Supported (Dword aligned) 00:14:40.843 SGL Keyed: Not Supported 00:14:40.843 SGL Bit Bucket Descriptor: Not Supported 00:14:40.843 SGL Metadata Pointer: Not Supported 00:14:40.843 Oversized SGL: Not Supported 00:14:40.843 SGL Metadata Address: Not Supported 00:14:40.843 SGL Offset: Not Supported 00:14:40.843 Transport SGL Data Block: Not Supported 00:14:40.843 Replay Protected Memory Block: Not Supported 00:14:40.843 00:14:40.843 Firmware Slot Information 00:14:40.843 ========================= 00:14:40.843 Active slot: 1 00:14:40.843 Slot 1 Firmware Revision: 24.05.1 00:14:40.843 00:14:40.843 00:14:40.843 Commands Supported and Effects 00:14:40.843 ============================== 00:14:40.843 Admin Commands 00:14:40.843 -------------- 00:14:40.843 Get Log Page (02h): Supported 00:14:40.843 Identify (06h): Supported 00:14:40.843 Abort (08h): Supported 00:14:40.843 Set Features (09h): Supported 00:14:40.843 Get Features (0Ah): Supported 00:14:40.843 Asynchronous Event Request (0Ch): Supported 00:14:40.843 Keep Alive (18h): Supported 00:14:40.843 I/O Commands 00:14:40.843 ------------ 00:14:40.843 Flush (00h): Supported LBA-Change 00:14:40.843 Write (01h): Supported LBA-Change 00:14:40.843 Read (02h): Supported 00:14:40.843 Compare (05h): Supported 
00:14:40.843 Write Zeroes (08h): Supported LBA-Change 00:14:40.843 Dataset Management (09h): Supported LBA-Change 00:14:40.843 Copy (19h): Supported LBA-Change 00:14:40.843 Unknown (79h): Supported LBA-Change 00:14:40.843 Unknown (7Ah): Supported 00:14:40.843 00:14:40.843 Error Log 00:14:40.843 ========= 00:14:40.843 00:14:40.843 Arbitration 00:14:40.843 =========== 00:14:40.843 Arbitration Burst: 1 00:14:40.843 00:14:40.843 Power Management 00:14:40.843 ================ 00:14:40.843 Number of Power States: 1 00:14:40.843 Current Power State: Power State #0 00:14:40.843 Power State #0: 00:14:40.843 Max Power: 0.00 W 00:14:40.843 Non-Operational State: Operational 00:14:40.843 Entry Latency: Not Reported 00:14:40.843 Exit Latency: Not Reported 00:14:40.843 Relative Read Throughput: 0 00:14:40.843 Relative Read Latency: 0 00:14:40.843 Relative Write Throughput: 0 00:14:40.843 Relative Write Latency: 0 00:14:40.843 Idle Power: Not Reported 00:14:40.843 Active Power: Not Reported 00:14:40.843 Non-Operational Permissive Mode: Not Supported 00:14:40.843 00:14:40.843 Health Information 00:14:40.843 ================== 00:14:40.843 Critical Warnings: 00:14:40.843 Available Spare Space: OK 00:14:40.843 Temperature: OK 00:14:40.843 Device Reliability: OK 00:14:40.843 Read Only: No 00:14:40.843 Volatile Memory Backup: OK 00:14:40.843 Current Temperature: 0 Kelvin[2024-07-15 23:58:15.156355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:40.843 [2024-07-15 23:58:15.156373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:40.843 [2024-07-15 23:58:15.156412] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:40.843 [2024-07-15 23:58:15.156431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:40.843 [2024-07-15 23:58:15.156443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.843 [2024-07-15 23:58:15.156455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.843 [2024-07-15 23:58:15.156466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.843 [2024-07-15 23:58:15.160160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:40.843 [2024-07-15 23:58:15.160182] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:40.843 [2024-07-15 23:58:15.160846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:40.843 [2024-07-15 23:58:15.160923] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:40.843 [2024-07-15 23:58:15.160937] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:40.843 [2024-07-15 23:58:15.161847] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:40.843 [2024-07-15 23:58:15.161871] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:40.843 [2024-07-15 23:58:15.161949] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:40.843 [2024-07-15 23:58:15.163891] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, 
IOVA 0x200000200000, Size 0x200000 00:14:40.843 (-273 Celsius) 00:14:40.843 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:40.843 Available Spare: 0% 00:14:40.843 Available Spare Threshold: 0% 00:14:40.843 Life Percentage Used: 0% 00:14:40.843 Data Units Read: 0 00:14:40.843 Data Units Written: 0 00:14:40.843 Host Read Commands: 0 00:14:40.843 Host Write Commands: 0 00:14:40.843 Controller Busy Time: 0 minutes 00:14:40.843 Power Cycles: 0 00:14:40.843 Power On Hours: 0 hours 00:14:40.843 Unsafe Shutdowns: 0 00:14:40.843 Unrecoverable Media Errors: 0 00:14:40.843 Lifetime Error Log Entries: 0 00:14:40.843 Warning Temperature Time: 0 minutes 00:14:40.843 Critical Temperature Time: 0 minutes 00:14:40.843 00:14:40.843 Number of Queues 00:14:40.843 ================ 00:14:40.843 Number of I/O Submission Queues: 127 00:14:40.843 Number of I/O Completion Queues: 127 00:14:40.843 00:14:40.843 Active Namespaces 00:14:40.843 ================= 00:14:40.843 Namespace ID:1 00:14:40.843 Error Recovery Timeout: Unlimited 00:14:40.843 Command Set Identifier: NVM (00h) 00:14:40.843 Deallocate: Supported 00:14:40.843 Deallocated/Unwritten Error: Not Supported 00:14:40.843 Deallocated Read Value: Unknown 00:14:40.844 Deallocate in Write Zeroes: Not Supported 00:14:40.844 Deallocated Guard Field: 0xFFFF 00:14:40.844 Flush: Supported 00:14:40.844 Reservation: Supported 00:14:40.844 Namespace Sharing Capabilities: Multiple Controllers 00:14:40.844 Size (in LBAs): 131072 (0GiB) 00:14:40.844 Capacity (in LBAs): 131072 (0GiB) 00:14:40.844 Utilization (in LBAs): 131072 (0GiB) 00:14:40.844 NGUID: D852A24E80274F29BE2EC34AC06416A8 00:14:40.844 UUID: d852a24e-8027-4f29-be2e-c34ac06416a8 00:14:40.844 Thin Provisioning: Not Supported 00:14:40.844 Per-NS Atomic Units: Yes 00:14:40.844 Atomic Boundary Size (Normal): 0 00:14:40.844 Atomic Boundary Size (PFail): 0 00:14:40.844 Atomic Boundary Offset: 0 00:14:40.844 Maximum Single Source Range Length: 65535 00:14:40.844 Maximum Copy Length: 65535 
00:14:40.844 Maximum Source Range Count: 1 00:14:40.844 NGUID/EUI64 Never Reused: No 00:14:40.844 Namespace Write Protected: No 00:14:40.844 Number of LBA Formats: 1 00:14:40.844 Current LBA Format: LBA Format #00 00:14:40.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:40.844 00:14:40.844 23:58:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:40.844 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.102 [2024-07-15 23:58:15.386972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.415 Initializing NVMe Controllers 00:14:46.415 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.415 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:46.415 Initialization complete. Launching workers. 
00:14:46.415 ======================================================== 00:14:46.415 Latency(us) 00:14:46.415 Device Information : IOPS MiB/s Average min max 00:14:46.415 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24090.20 94.10 5317.72 1455.88 10543.47 00:14:46.415 ======================================================== 00:14:46.415 Total : 24090.20 94.10 5317.72 1455.88 10543.47 00:14:46.415 00:14:46.415 [2024-07-15 23:58:20.409526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.415 23:58:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:46.415 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.415 [2024-07-15 23:58:20.632688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:51.699 Initializing NVMe Controllers 00:14:51.699 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:51.699 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:51.699 Initialization complete. Launching workers. 
00:14:51.699 ======================================================== 00:14:51.699 Latency(us) 00:14:51.699 Device Information : IOPS MiB/s Average min max 00:14:51.699 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.47 62.70 7979.89 7661.55 8100.26 00:14:51.700 ======================================================== 00:14:51.700 Total : 16050.47 62.70 7979.89 7661.55 8100.26 00:14:51.700 00:14:51.700 [2024-07-15 23:58:25.675054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:51.700 23:58:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:51.700 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.700 [2024-07-15 23:58:25.888176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:56.963 [2024-07-15 23:58:30.965453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:56.963 Initializing NVMe Controllers 00:14:56.963 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.963 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:56.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:56.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:56.963 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:56.963 Initialization complete. Launching workers. 
00:14:56.963 Starting thread on core 2 00:14:56.963 Starting thread on core 3 00:14:56.963 Starting thread on core 1 00:14:56.963 23:58:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:56.963 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.963 [2024-07-15 23:58:31.249672] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.249 [2024-07-15 23:58:34.324206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.249 Initializing NVMe Controllers 00:15:00.249 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.249 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.249 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:00.249 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:00.249 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:00.249 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:00.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:00.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:00.249 Initialization complete. Launching workers. 
00:15:00.249 Starting thread on core 1 with urgent priority queue 00:15:00.249 Starting thread on core 2 with urgent priority queue 00:15:00.249 Starting thread on core 3 with urgent priority queue 00:15:00.249 Starting thread on core 0 with urgent priority queue 00:15:00.249 SPDK bdev Controller (SPDK1 ) core 0: 6450.67 IO/s 15.50 secs/100000 ios 00:15:00.249 SPDK bdev Controller (SPDK1 ) core 1: 7019.00 IO/s 14.25 secs/100000 ios 00:15:00.249 SPDK bdev Controller (SPDK1 ) core 2: 7880.33 IO/s 12.69 secs/100000 ios 00:15:00.249 SPDK bdev Controller (SPDK1 ) core 3: 6242.33 IO/s 16.02 secs/100000 ios 00:15:00.249 ======================================================== 00:15:00.249 00:15:00.249 23:58:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:00.249 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.249 [2024-07-15 23:58:34.601712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.249 Initializing NVMe Controllers 00:15:00.249 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.249 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.249 Namespace ID: 1 size: 0GB 00:15:00.249 Initialization complete. 00:15:00.249 INFO: using host memory buffer for IO 00:15:00.249 Hello world! 
00:15:00.249 [2024-07-15 23:58:34.635370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.249 23:58:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:00.249 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.508 [2024-07-15 23:58:34.909687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:01.444 Initializing NVMe Controllers 00:15:01.444 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.444 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.444 Initialization complete. Launching workers. 00:15:01.444 submit (in ns) avg, min, max = 9441.3, 4453.3, 4013976.3 00:15:01.444 complete (in ns) avg, min, max = 28312.5, 2648.9, 4999952.6 00:15:01.444 00:15:01.444 Submit histogram 00:15:01.444 ================ 00:15:01.444 Range in us Cumulative Count 00:15:01.444 4.433 - 4.456: 0.0084% ( 1) 00:15:01.444 4.456 - 4.480: 0.0421% ( 4) 00:15:01.444 4.480 - 4.504: 0.7496% ( 84) 00:15:01.444 4.504 - 4.527: 2.7794% ( 241) 00:15:01.444 4.527 - 4.551: 6.4600% ( 437) 00:15:01.444 4.551 - 4.575: 11.1092% ( 552) 00:15:01.444 4.575 - 4.599: 14.4361% ( 395) 00:15:01.444 4.599 - 4.622: 16.5165% ( 247) 00:15:01.444 4.622 - 4.646: 17.2576% ( 88) 00:15:01.444 4.646 - 4.670: 17.8893% ( 75) 00:15:01.444 4.670 - 4.693: 19.3296% ( 171) 00:15:01.444 4.693 - 4.717: 22.7407% ( 405) 00:15:01.444 4.717 - 4.741: 30.3799% ( 907) 00:15:01.444 4.741 - 4.764: 37.8253% ( 884) 00:15:01.444 4.764 - 4.788: 41.0595% ( 384) 00:15:01.444 4.788 - 4.812: 42.8956% ( 218) 00:15:01.444 4.812 - 4.836: 43.8305% ( 111) 00:15:01.444 4.836 - 4.859: 44.5549% ( 86) 00:15:01.444 4.859 - 4.883: 45.5571% ( 119) 00:15:01.444 4.883 - 4.907: 46.8795% ( 157) 
00:15:01.444 4.907 - 4.930: 49.4399% ( 304) 00:15:01.444 4.930 - 4.954: 50.9981% ( 185) 00:15:01.444 4.954 - 4.978: 52.5731% ( 187) 00:15:01.444 4.978 - 5.001: 53.4069% ( 99) 00:15:01.444 5.001 - 5.025: 53.9459% ( 64) 00:15:01.444 5.025 - 5.049: 54.2239% ( 33) 00:15:01.444 5.049 - 5.073: 54.3249% ( 12) 00:15:01.444 5.073 - 5.096: 54.4260% ( 12) 00:15:01.444 5.096 - 5.120: 55.1756% ( 89) 00:15:01.444 5.120 - 5.144: 57.1380% ( 233) 00:15:01.444 5.144 - 5.167: 64.0360% ( 819) 00:15:01.444 5.167 - 5.191: 66.7144% ( 318) 00:15:01.444 5.191 - 5.215: 68.2641% ( 184) 00:15:01.444 5.215 - 5.239: 69.3254% ( 126) 00:15:01.444 5.239 - 5.262: 69.9570% ( 75) 00:15:01.444 5.262 - 5.286: 70.6561% ( 83) 00:15:01.444 5.286 - 5.310: 75.2716% ( 548) 00:15:01.444 5.310 - 5.333: 77.9921% ( 323) 00:15:01.444 5.333 - 5.357: 79.4997% ( 179) 00:15:01.444 5.357 - 5.381: 80.1735% ( 80) 00:15:01.444 5.381 - 5.404: 81.9843% ( 215) 00:15:01.444 5.404 - 5.428: 82.4560% ( 56) 00:15:01.444 5.428 - 5.452: 82.8097% ( 42) 00:15:01.444 5.452 - 5.476: 82.9361% ( 15) 00:15:01.444 5.476 - 5.499: 83.0035% ( 8) 00:15:01.444 5.499 - 5.523: 83.2645% ( 31) 00:15:01.444 5.523 - 5.547: 90.4742% ( 856) 00:15:01.444 5.547 - 5.570: 93.5315% ( 363) 00:15:01.444 5.570 - 5.594: 95.5950% ( 245) 00:15:01.444 5.594 - 5.618: 96.2183% ( 74) 00:15:01.444 5.618 - 5.641: 96.5468% ( 39) 00:15:01.444 5.641 - 5.665: 96.7405% ( 23) 00:15:01.444 5.665 - 5.689: 96.8753% ( 16) 00:15:01.444 5.689 - 5.713: 97.0100% ( 16) 00:15:01.444 5.713 - 5.736: 97.0353% ( 3) 00:15:01.444 5.736 - 5.760: 97.0690% ( 4) 00:15:01.444 5.760 - 5.784: 97.2374% ( 20) 00:15:01.444 5.784 - 5.807: 97.3385% ( 12) 00:15:01.444 5.807 - 5.831: 97.5659% ( 27) 00:15:01.444 5.831 - 5.855: 97.6164% ( 6) 00:15:01.444 5.855 - 5.879: 97.6838% ( 8) 00:15:01.444 5.879 - 5.902: 97.7765% ( 11) 00:15:01.444 5.902 - 5.926: 97.8607% ( 10) 00:15:01.444 5.926 - 5.950: 97.9196% ( 7) 00:15:01.444 5.950 - 5.973: 97.9281% ( 1) 00:15:01.444 5.973 - 5.997: 98.0039% ( 9) 00:15:01.444 
5.997 - 6.021: 98.0291% ( 3) 00:15:01.444 6.021 - 6.044: 98.0460% ( 2) 00:15:01.444 6.044 - 6.068: 98.0544% ( 1) 00:15:01.444 6.068 - 6.116: 98.1471% ( 11) 00:15:01.444 6.116 - 6.163: 98.1892% ( 5) 00:15:01.444 6.163 - 6.210: 98.2397% ( 6) 00:15:01.444 6.210 - 6.258: 98.3745% ( 16) 00:15:01.444 6.258 - 6.305: 98.4334% ( 7) 00:15:01.444 6.305 - 6.353: 98.4755% ( 5) 00:15:01.444 6.353 - 6.400: 98.4840% ( 1) 00:15:01.444 6.400 - 6.447: 98.5092% ( 3) 00:15:01.444 6.447 - 6.495: 98.6692% ( 19) 00:15:01.444 6.495 - 6.542: 98.6861% ( 2) 00:15:01.444 6.542 - 6.590: 98.6945% ( 1) 00:15:01.444 6.590 - 6.637: 98.7198% ( 3) 00:15:01.444 6.637 - 6.684: 98.7703% ( 6) 00:15:01.444 6.684 - 6.732: 98.7872% ( 2) 00:15:01.444 6.779 - 6.827: 98.8124% ( 3) 00:15:01.444 6.827 - 6.874: 99.0820% ( 32) 00:15:01.444 6.874 - 6.921: 99.1830% ( 12) 00:15:01.444 6.921 - 6.969: 99.2925% ( 13) 00:15:01.444 6.969 - 7.016: 99.3094% ( 2) 00:15:01.444 7.016 - 7.064: 99.3262% ( 2) 00:15:01.444 7.111 - 7.159: 99.3346% ( 1) 00:15:01.444 8.059 - 8.107: 99.3430% ( 1) 00:15:01.444 8.439 - 8.486: 99.3515% ( 1) 00:15:01.444 8.486 - 8.533: 99.3599% ( 1) 00:15:01.444 8.770 - 8.818: 99.3683% ( 1) 00:15:01.444 8.818 - 8.865: 99.3767% ( 1) 00:15:01.444 8.913 - 8.960: 99.3852% ( 1) 00:15:01.444 8.960 - 9.007: 99.3936% ( 1) 00:15:01.444 9.007 - 9.055: 99.4020% ( 1) 00:15:01.444 9.197 - 9.244: 99.4104% ( 1) 00:15:01.444 9.292 - 9.339: 99.4188% ( 1) 00:15:01.444 9.434 - 9.481: 99.4273% ( 1) 00:15:01.444 9.481 - 9.529: 99.4357% ( 1) 00:15:01.444 9.529 - 9.576: 99.4610% ( 3) 00:15:01.444 9.576 - 9.624: 99.4694% ( 1) 00:15:01.444 9.624 - 9.671: 99.4947% ( 3) 00:15:01.444 9.719 - 9.766: 99.5115% ( 2) 00:15:01.444 9.861 - 9.908: 99.5199% ( 1) 00:15:01.444 9.956 - 10.003: 99.5283% ( 1) 00:15:01.444 10.003 - 10.050: 99.5452% ( 2) 00:15:01.444 10.050 - 10.098: 99.5620% ( 2) 00:15:01.444 10.287 - 10.335: 99.5705% ( 1) 00:15:01.444 10.382 - 10.430: 99.5957% ( 3) 00:15:01.444 10.430 - 10.477: 99.6041% ( 1) 00:15:01.444 10.477 - 
10.524: 99.6294% ( 3) 00:15:01.444 10.619 - 10.667: 99.6378% ( 1) 00:15:01.444 10.667 - 10.714: 99.6463% ( 1) 00:15:01.444 10.856 - 10.904: 99.6631% ( 2) 00:15:01.444 10.904 - 10.951: 99.6799% ( 2) 00:15:01.444 11.046 - 11.093: 99.6884% ( 1) 00:15:01.444 11.141 - 11.188: 99.6968% ( 1) 00:15:01.444 11.283 - 11.330: 99.7052% ( 1) 00:15:01.444 11.378 - 11.425: 99.7136% ( 1) 00:15:01.444 11.473 - 11.520: 99.7221% ( 1) 00:15:01.444 11.520 - 11.567: 99.7305% ( 1) 00:15:01.444 12.326 - 12.421: 99.7389% ( 1) 00:15:01.444 13.179 - 13.274: 99.7473% ( 1) 00:15:01.444 13.464 - 13.559: 99.7557% ( 1) 00:15:01.444 13.559 - 13.653: 99.7726% ( 2) 00:15:01.444 13.653 - 13.748: 99.8147% ( 5) 00:15:01.444 13.748 - 13.843: 99.8568% ( 5) 00:15:01.444 14.033 - 14.127: 99.8652% ( 1) 00:15:01.444 14.222 - 14.317: 99.8737% ( 1) 00:15:01.444 14.696 - 14.791: 99.8821% ( 1) 00:15:01.444 16.498 - 16.593: 99.8905% ( 1) 00:15:01.444 3980.705 - 4004.978: 99.9326% ( 5) 00:15:01.444 4004.978 - 4029.250: 100.0000% ( 8) 00:15:01.444 00:15:01.444 Complete histogram 00:15:01.444 ================== 00:15:01.444 Range in us Cumulative Count 00:15:01.444 2.643 - 2.655: 0.2779% ( 33) 00:15:01.444 2.655 - 2.667: 16.8702% ( 1970) 00:15:01.444 2.667 - 2.679: 61.6609% ( 5318) 00:15:01.444 2.679 - 2.690: 72.8881% ( 1333) 00:15:01.444 2.690 - 2.702: 78.0932% ( 618) 00:15:01.444 2.702 - 2.714: 85.7997% ( 915) 00:15:01.444 2.714 - 2.726: 91.4428% ( 670) 00:15:01.444 2.726 - 2.738: 96.0499% ( 547) 00:15:01.444 2.738 - 2.750: 97.3722% ( 157) 00:15:01.445 2.750 - 2.761: 97.7849% ( 49) 00:15:01.445 2.761 - 2.773: 98.1386% ( 42) 00:15:01.445 2.773 - 2.785: 98.3660% ( 27) 00:15:01.445 2.785 - 2.797: 98.5008% ( 16) 00:15:01.445 2.797 - 2.809: 98.6187% ( 14) 00:15:01.445 2.809 - 2.821: 98.6608% ( 5) 00:15:01.445 2.821 - 2.833: 98.6777% ( 2) 00:15:01.445 2.833 - 2.844: 98.7029% ( 3) 00:15:01.445 2.892 - 2.904: 98.7282% ( 3) 00:15:01.445 2.916 - 2.927: 98.7451% ( 2) 00:15:01.445 2.927 - 2.939: 98.7703% ( 3) 00:15:01.445 
2.939 - 2.951: 98.8040% ( 4) 00:15:01.445 2.951 - 2.963: 98.8124% ( 1) 00:15:01.445 2.963 - 2.975: 98.8293% ( 2) 00:15:01.445 2.987 - 2.999: 98.8377% ( 1) 00:15:01.445 3.153 - 3.176: 98.8461% ( 1) 00:15:01.445 3.200 - 3.224: 98.8545% ( 1) 00:15:01.445 3.319 - 3.342: 98.8714% ( 2) 00:15:01.445 3.342 - 3.366: 98.8882% ( 2) 00:15:01.445 [2024-07-15 23:58:35.932954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.702 3.366 - 3.390: 98.9135% ( 3) 00:15:01.702 3.390 - 3.413: 98.9472% ( 4) 00:15:01.702 3.413 - 3.437: 98.9893% ( 5) 00:15:01.702 3.437 - 3.461: 99.0146% ( 3) 00:15:01.702 3.461 - 3.484: 99.0483% ( 4) 00:15:01.702 3.484 - 3.508: 99.0651% ( 2) 00:15:01.702 3.508 - 3.532: 99.0735% ( 1) 00:15:01.702 3.532 - 3.556: 99.0904% ( 2) 00:15:01.702 3.579 - 3.603: 99.1072% ( 2) 00:15:01.702 3.650 - 3.674: 99.1156% ( 1) 00:15:01.702 4.267 - 4.290: 99.1241% ( 1) 00:15:01.702 5.713 - 5.736: 99.1325% ( 1) 00:15:01.702 6.068 - 6.116: 99.1409% ( 1) 00:15:01.702 6.163 - 6.210: 99.1493% ( 1) 00:15:01.702 6.258 - 6.305: 99.1578% ( 1) 00:15:01.702 6.353 - 6.400: 99.1662% ( 1) 00:15:01.702 6.684 - 6.732: 99.1746% ( 1) 00:15:01.702 7.111 - 7.159: 99.1914% ( 2) 00:15:01.702 7.159 - 7.206: 99.1999% ( 1) 00:15:01.702 7.348 - 7.396: 99.2083% ( 1) 00:15:01.702 7.443 - 7.490: 99.2167% ( 1) 00:15:01.702 7.538 - 7.585: 99.2336% ( 2) 00:15:01.702 7.585 - 7.633: 99.2420% ( 1) 00:15:01.702 7.633 - 7.680: 99.2588% ( 2) 00:15:01.702 7.775 - 7.822: 99.2672% ( 1) 00:15:01.702 8.012 - 8.059: 99.2757% ( 1) 00:15:01.702 8.107 - 8.154: 99.2841% ( 1) 00:15:01.702 8.391 - 8.439: 99.2925% ( 1) 00:15:01.702 8.486 - 8.533: 99.3009% ( 1) 00:15:01.702 8.818 - 8.865: 99.3094% ( 1) 00:15:01.702 8.960 - 9.007: 99.3178% ( 1) 00:15:01.702 9.956 - 10.003: 99.3262% ( 1) 00:15:01.702 10.524 - 10.572: 99.3346% ( 1) 00:15:01.702 10.951 - 10.999: 99.3430% ( 1) 00:15:01.702 11.567 - 11.615: 99.3515% ( 1) 00:15:01.702 20.575 - 20.670: 99.3599% ( 1) 
00:15:01.702 3009.801 - 3021.938: 99.3683% ( 1) 00:15:01.702 3980.705 - 4004.978: 99.7473% ( 45) 00:15:01.702 4004.978 - 4029.250: 99.9916% ( 29) 00:15:01.702 4975.881 - 5000.154: 100.0000% ( 1) 00:15:01.702 00:15:01.702 23:58:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:01.702 23:58:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:01.702 23:58:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:01.702 23:58:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:01.702 23:58:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.961 [ 00:15:01.961 { 00:15:01.961 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.961 "subtype": "Discovery", 00:15:01.961 "listen_addresses": [], 00:15:01.961 "allow_any_host": true, 00:15:01.961 "hosts": [] 00:15:01.961 }, 00:15:01.961 { 00:15:01.961 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.961 "subtype": "NVMe", 00:15:01.961 "listen_addresses": [ 00:15:01.961 { 00:15:01.961 "trtype": "VFIOUSER", 00:15:01.961 "adrfam": "IPv4", 00:15:01.961 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.961 "trsvcid": "0" 00:15:01.961 } 00:15:01.961 ], 00:15:01.961 "allow_any_host": true, 00:15:01.961 "hosts": [], 00:15:01.961 "serial_number": "SPDK1", 00:15:01.961 "model_number": "SPDK bdev Controller", 00:15:01.961 "max_namespaces": 32, 00:15:01.961 "min_cntlid": 1, 00:15:01.961 "max_cntlid": 65519, 00:15:01.961 "namespaces": [ 00:15:01.961 { 00:15:01.961 "nsid": 1, 00:15:01.961 "bdev_name": "Malloc1", 00:15:01.961 "name": "Malloc1", 00:15:01.961 "nguid": "D852A24E80274F29BE2EC34AC06416A8", 00:15:01.961 "uuid": "d852a24e-8027-4f29-be2e-c34ac06416a8" 
00:15:01.961 } 00:15:01.961 ] 00:15:01.961 }, 00:15:01.961 { 00:15:01.961 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.961 "subtype": "NVMe", 00:15:01.961 "listen_addresses": [ 00:15:01.961 { 00:15:01.961 "trtype": "VFIOUSER", 00:15:01.961 "adrfam": "IPv4", 00:15:01.961 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.961 "trsvcid": "0" 00:15:01.961 } 00:15:01.961 ], 00:15:01.961 "allow_any_host": true, 00:15:01.961 "hosts": [], 00:15:01.961 "serial_number": "SPDK2", 00:15:01.961 "model_number": "SPDK bdev Controller", 00:15:01.961 "max_namespaces": 32, 00:15:01.961 "min_cntlid": 1, 00:15:01.961 "max_cntlid": 65519, 00:15:01.961 "namespaces": [ 00:15:01.961 { 00:15:01.961 "nsid": 1, 00:15:01.961 "bdev_name": "Malloc2", 00:15:01.961 "name": "Malloc2", 00:15:01.961 "nguid": "15B97A8A62B64AEDAC67B92B48116007", 00:15:01.961 "uuid": "15b97a8a-62b6-4aed-ac67-b92b48116007" 00:15:01.961 } 00:15:01.961 ] 00:15:01.961 } 00:15:01.961 ] 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1233889 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:01.961 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:01.961 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.961 [2024-07-15 23:58:36.447681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.220 Malloc3 00:15:02.220 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:02.478 [2024-07-15 23:58:36.887052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.478 23:58:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:02.478 Asynchronous Event Request test 00:15:02.478 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.478 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:02.478 Registering asynchronous event callbacks... 00:15:02.478 Starting namespace attribute notice tests for all controllers... 00:15:02.478 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:02.478 aer_cb - Changed Namespace 00:15:02.478 Cleaning up... 
00:15:02.737 [ 00:15:02.737 { 00:15:02.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:02.737 "subtype": "Discovery", 00:15:02.737 "listen_addresses": [], 00:15:02.737 "allow_any_host": true, 00:15:02.737 "hosts": [] 00:15:02.737 }, 00:15:02.737 { 00:15:02.737 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:02.737 "subtype": "NVMe", 00:15:02.737 "listen_addresses": [ 00:15:02.737 { 00:15:02.737 "trtype": "VFIOUSER", 00:15:02.737 "adrfam": "IPv4", 00:15:02.737 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:02.737 "trsvcid": "0" 00:15:02.737 } 00:15:02.737 ], 00:15:02.737 "allow_any_host": true, 00:15:02.737 "hosts": [], 00:15:02.737 "serial_number": "SPDK1", 00:15:02.737 "model_number": "SPDK bdev Controller", 00:15:02.737 "max_namespaces": 32, 00:15:02.737 "min_cntlid": 1, 00:15:02.737 "max_cntlid": 65519, 00:15:02.737 "namespaces": [ 00:15:02.737 { 00:15:02.737 "nsid": 1, 00:15:02.737 "bdev_name": "Malloc1", 00:15:02.737 "name": "Malloc1", 00:15:02.737 "nguid": "D852A24E80274F29BE2EC34AC06416A8", 00:15:02.737 "uuid": "d852a24e-8027-4f29-be2e-c34ac06416a8" 00:15:02.737 }, 00:15:02.737 { 00:15:02.737 "nsid": 2, 00:15:02.737 "bdev_name": "Malloc3", 00:15:02.737 "name": "Malloc3", 00:15:02.737 "nguid": "F8C355A0526244BD8B9CB72BCDB1AA3B", 00:15:02.737 "uuid": "f8c355a0-5262-44bd-8b9c-b72bcdb1aa3b" 00:15:02.737 } 00:15:02.737 ] 00:15:02.737 }, 00:15:02.737 { 00:15:02.737 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:02.737 "subtype": "NVMe", 00:15:02.737 "listen_addresses": [ 00:15:02.737 { 00:15:02.737 "trtype": "VFIOUSER", 00:15:02.737 "adrfam": "IPv4", 00:15:02.737 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:02.737 "trsvcid": "0" 00:15:02.737 } 00:15:02.737 ], 00:15:02.737 "allow_any_host": true, 00:15:02.737 "hosts": [], 00:15:02.737 "serial_number": "SPDK2", 00:15:02.737 "model_number": "SPDK bdev Controller", 00:15:02.737 "max_namespaces": 32, 00:15:02.737 "min_cntlid": 1, 00:15:02.737 "max_cntlid": 65519, 00:15:02.737 "namespaces": [ 
00:15:02.737 { 00:15:02.737 "nsid": 1, 00:15:02.737 "bdev_name": "Malloc2", 00:15:02.737 "name": "Malloc2", 00:15:02.737 "nguid": "15B97A8A62B64AEDAC67B92B48116007", 00:15:02.737 "uuid": "15b97a8a-62b6-4aed-ac67-b92b48116007" 00:15:02.737 } 00:15:02.737 ] 00:15:02.737 } 00:15:02.737 ] 00:15:02.737 23:58:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1233889 00:15:02.737 23:58:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.737 23:58:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:02.737 23:58:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:02.737 23:58:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:02.737 [2024-07-15 23:58:37.211059] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:02.737 [2024-07-15 23:58:37.211112] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233982 ] 00:15:02.737 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.997 [2024-07-15 23:58:37.253107] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:02.997 [2024-07-15 23:58:37.255537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.997 [2024-07-15 23:58:37.255569] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f39bcbdc000 00:15:02.997 [2024-07-15 23:58:37.256533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.257537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.258553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.259560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.260562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.261567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.262575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.263582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.997 [2024-07-15 23:58:37.264603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.997 [2024-07-15 23:58:37.264626] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f39bb992000 00:15:02.997 [2024-07-15 23:58:37.266078] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.997 [2024-07-15 23:58:37.286102] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:02.997 [2024-07-15 23:58:37.286136] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:02.997 [2024-07-15 23:58:37.288247] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:02.997 [2024-07-15 23:58:37.288306] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:02.997 [2024-07-15 23:58:37.288409] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:02.997 [2024-07-15 23:58:37.288440] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:02.997 [2024-07-15 23:58:37.288452] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:02.997 [2024-07-15 23:58:37.289254] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:02.997 [2024-07-15 23:58:37.289284] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:02.997 [2024-07-15 23:58:37.289299] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:02.997 [2024-07-15 23:58:37.290255] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:02.997 [2024-07-15 23:58:37.290278] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:02.997 [2024-07-15 23:58:37.290293] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.291262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:02.997 [2024-07-15 23:58:37.291283] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.294156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:02.997 [2024-07-15 23:58:37.294185] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:02.997 [2024-07-15 23:58:37.294195] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.294209] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.294321] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:02.997 [2024-07-15 23:58:37.294330] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.294340] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:02.997 [2024-07-15 23:58:37.294423] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:02.997 [2024-07-15 23:58:37.295424] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:02.997 [2024-07-15 23:58:37.296430] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:02.997 [2024-07-15 23:58:37.297429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:02.997 [2024-07-15 23:58:37.297509] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.997 [2024-07-15 23:58:37.298445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:02.998 [2024-07-15 23:58:37.298466] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.998 [2024-07-15 23:58:37.298476] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.298505] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:02.998 [2024-07-15 23:58:37.298521] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.298551] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.998 [2024-07-15 23:58:37.298562] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.998 [2024-07-15 23:58:37.298582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.305166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.305196] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:02.998 [2024-07-15 23:58:37.305207] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:02.998 [2024-07-15 23:58:37.305217] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:02.998 [2024-07-15 23:58:37.305226] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:02.998 [2024-07-15 23:58:37.305235] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:02.998 [2024-07-15 23:58:37.305244] 
nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:02.998 [2024-07-15 23:58:37.305253] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.305268] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.305285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.313151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.313178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.998 [2024-07-15 23:58:37.313197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.998 [2024-07-15 23:58:37.313212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.998 [2024-07-15 23:58:37.313227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.998 [2024-07-15 23:58:37.313237] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.313255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.313271] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.321152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.321171] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:02.998 [2024-07-15 23:58:37.321182] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.321195] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.321212] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.321229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.329159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.329245] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.329263] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.329278] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:02.998 [2024-07-15 23:58:37.329288] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:02.998 [2024-07-15 23:58:37.329299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.337156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.337182] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:02.998 [2024-07-15 23:58:37.337205] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.337222] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.337236] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.998 [2024-07-15 23:58:37.337245] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.998 [2024-07-15 23:58:37.337257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.345157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.345190] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.345208] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:15:02.998 [2024-07-15 23:58:37.345223] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.998 [2024-07-15 23:58:37.345232] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.998 [2024-07-15 23:58:37.345243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.353158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.353183] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.353197] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.353213] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.353226] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.353236] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:02.998 [2024-07-15 23:58:37.353246] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.998 [2024-07-15 23:58:37.353255] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:02.998 [2024-07-15 
23:58:37.353265] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:02.998 [2024-07-15 23:58:37.353297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.361148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.361189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.369151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.369179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.377153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.377180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.385153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.385182] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:02.998 [2024-07-15 23:58:37.385192] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:02.998 [2024-07-15 23:58:37.385200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:02.998 [2024-07-15 23:58:37.385212] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:15:02.998 [2024-07-15 23:58:37.385224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:02.998 [2024-07-15 23:58:37.385238] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:02.998 [2024-07-15 23:58:37.385247] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:02.998 [2024-07-15 23:58:37.385258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.385270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:02.998 [2024-07-15 23:58:37.385280] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.998 [2024-07-15 23:58:37.385290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.385303] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:02.998 [2024-07-15 23:58:37.385312] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:02.998 [2024-07-15 23:58:37.385323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:02.998 [2024-07-15 23:58:37.393157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.393187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 
23:58:37.393205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:02.998 [2024-07-15 23:58:37.393222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:02.998 ===================================================== 00:15:02.998 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:02.998 ===================================================== 00:15:02.998 Controller Capabilities/Features 00:15:02.998 ================================ 00:15:02.998 Vendor ID: 4e58 00:15:02.999 Subsystem Vendor ID: 4e58 00:15:02.999 Serial Number: SPDK2 00:15:02.999 Model Number: SPDK bdev Controller 00:15:02.999 Firmware Version: 24.05.1 00:15:02.999 Recommended Arb Burst: 6 00:15:02.999 IEEE OUI Identifier: 8d 6b 50 00:15:02.999 Multi-path I/O 00:15:02.999 May have multiple subsystem ports: Yes 00:15:02.999 May have multiple controllers: Yes 00:15:02.999 Associated with SR-IOV VF: No 00:15:02.999 Max Data Transfer Size: 131072 00:15:02.999 Max Number of Namespaces: 32 00:15:02.999 Max Number of I/O Queues: 127 00:15:02.999 NVMe Specification Version (VS): 1.3 00:15:02.999 NVMe Specification Version (Identify): 1.3 00:15:02.999 Maximum Queue Entries: 256 00:15:02.999 Contiguous Queues Required: Yes 00:15:02.999 Arbitration Mechanisms Supported 00:15:02.999 Weighted Round Robin: Not Supported 00:15:02.999 Vendor Specific: Not Supported 00:15:02.999 Reset Timeout: 15000 ms 00:15:02.999 Doorbell Stride: 4 bytes 00:15:02.999 NVM Subsystem Reset: Not Supported 00:15:02.999 Command Sets Supported 00:15:02.999 NVM Command Set: Supported 00:15:02.999 Boot Partition: Not Supported 00:15:02.999 Memory Page Size Minimum: 4096 bytes 00:15:02.999 Memory Page Size Maximum: 4096 bytes 00:15:02.999 Persistent Memory Region: Not Supported 00:15:02.999 Optional Asynchronous Events Supported 00:15:02.999 
Namespace Attribute Notices: Supported 00:15:02.999 Firmware Activation Notices: Not Supported 00:15:02.999 ANA Change Notices: Not Supported 00:15:02.999 PLE Aggregate Log Change Notices: Not Supported 00:15:02.999 LBA Status Info Alert Notices: Not Supported 00:15:02.999 EGE Aggregate Log Change Notices: Not Supported 00:15:02.999 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.999 Zone Descriptor Change Notices: Not Supported 00:15:02.999 Discovery Log Change Notices: Not Supported 00:15:02.999 Controller Attributes 00:15:02.999 128-bit Host Identifier: Supported 00:15:02.999 Non-Operational Permissive Mode: Not Supported 00:15:02.999 NVM Sets: Not Supported 00:15:02.999 Read Recovery Levels: Not Supported 00:15:02.999 Endurance Groups: Not Supported 00:15:02.999 Predictable Latency Mode: Not Supported 00:15:02.999 Traffic Based Keep ALive: Not Supported 00:15:02.999 Namespace Granularity: Not Supported 00:15:02.999 SQ Associations: Not Supported 00:15:02.999 UUID List: Not Supported 00:15:02.999 Multi-Domain Subsystem: Not Supported 00:15:02.999 Fixed Capacity Management: Not Supported 00:15:02.999 Variable Capacity Management: Not Supported 00:15:02.999 Delete Endurance Group: Not Supported 00:15:02.999 Delete NVM Set: Not Supported 00:15:02.999 Extended LBA Formats Supported: Not Supported 00:15:02.999 Flexible Data Placement Supported: Not Supported 00:15:02.999 00:15:02.999 Controller Memory Buffer Support 00:15:02.999 ================================ 00:15:02.999 Supported: No 00:15:02.999 00:15:02.999 Persistent Memory Region Support 00:15:02.999 ================================ 00:15:02.999 Supported: No 00:15:02.999 00:15:02.999 Admin Command Set Attributes 00:15:02.999 ============================ 00:15:02.999 Security Send/Receive: Not Supported 00:15:02.999 Format NVM: Not Supported 00:15:02.999 Firmware Activate/Download: Not Supported 00:15:02.999 Namespace Management: Not Supported 00:15:02.999 Device Self-Test: Not Supported 
00:15:02.999 Directives: Not Supported 00:15:02.999 NVMe-MI: Not Supported 00:15:02.999 Virtualization Management: Not Supported 00:15:02.999 Doorbell Buffer Config: Not Supported 00:15:02.999 Get LBA Status Capability: Not Supported 00:15:02.999 Command & Feature Lockdown Capability: Not Supported 00:15:02.999 Abort Command Limit: 4 00:15:02.999 Async Event Request Limit: 4 00:15:02.999 Number of Firmware Slots: N/A 00:15:02.999 Firmware Slot 1 Read-Only: N/A 00:15:02.999 Firmware Activation Without Reset: N/A 00:15:02.999 Multiple Update Detection Support: N/A 00:15:02.999 Firmware Update Granularity: No Information Provided 00:15:02.999 Per-Namespace SMART Log: No 00:15:02.999 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.999 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:02.999 Command Effects Log Page: Supported 00:15:02.999 Get Log Page Extended Data: Supported 00:15:02.999 Telemetry Log Pages: Not Supported 00:15:02.999 Persistent Event Log Pages: Not Supported 00:15:02.999 Supported Log Pages Log Page: May Support 00:15:02.999 Commands Supported & Effects Log Page: Not Supported 00:15:02.999 Feature Identifiers & Effects Log Page:May Support 00:15:02.999 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.999 Data Area 4 for Telemetry Log: Not Supported 00:15:02.999 Error Log Page Entries Supported: 128 00:15:02.999 Keep Alive: Supported 00:15:02.999 Keep Alive Granularity: 10000 ms 00:15:02.999 00:15:02.999 NVM Command Set Attributes 00:15:02.999 ========================== 00:15:02.999 Submission Queue Entry Size 00:15:02.999 Max: 64 00:15:02.999 Min: 64 00:15:02.999 Completion Queue Entry Size 00:15:02.999 Max: 16 00:15:02.999 Min: 16 00:15:02.999 Number of Namespaces: 32 00:15:02.999 Compare Command: Supported 00:15:02.999 Write Uncorrectable Command: Not Supported 00:15:02.999 Dataset Management Command: Supported 00:15:02.999 Write Zeroes Command: Supported 00:15:02.999 Set Features Save Field: Not Supported 00:15:02.999 
Reservations: Not Supported 00:15:02.999 Timestamp: Not Supported 00:15:02.999 Copy: Supported 00:15:02.999 Volatile Write Cache: Present 00:15:02.999 Atomic Write Unit (Normal): 1 00:15:02.999 Atomic Write Unit (PFail): 1 00:15:02.999 Atomic Compare & Write Unit: 1 00:15:02.999 Fused Compare & Write: Supported 00:15:02.999 Scatter-Gather List 00:15:02.999 SGL Command Set: Supported (Dword aligned) 00:15:02.999 SGL Keyed: Not Supported 00:15:02.999 SGL Bit Bucket Descriptor: Not Supported 00:15:02.999 SGL Metadata Pointer: Not Supported 00:15:02.999 Oversized SGL: Not Supported 00:15:02.999 SGL Metadata Address: Not Supported 00:15:02.999 SGL Offset: Not Supported 00:15:02.999 Transport SGL Data Block: Not Supported 00:15:02.999 Replay Protected Memory Block: Not Supported 00:15:02.999 00:15:02.999 Firmware Slot Information 00:15:02.999 ========================= 00:15:02.999 Active slot: 1 00:15:02.999 Slot 1 Firmware Revision: 24.05.1 00:15:02.999 00:15:02.999 00:15:02.999 Commands Supported and Effects 00:15:02.999 ============================== 00:15:02.999 Admin Commands 00:15:02.999 -------------- 00:15:02.999 Get Log Page (02h): Supported 00:15:02.999 Identify (06h): Supported 00:15:02.999 Abort (08h): Supported 00:15:02.999 Set Features (09h): Supported 00:15:02.999 Get Features (0Ah): Supported 00:15:02.999 Asynchronous Event Request (0Ch): Supported 00:15:02.999 Keep Alive (18h): Supported 00:15:02.999 I/O Commands 00:15:02.999 ------------ 00:15:02.999 Flush (00h): Supported LBA-Change 00:15:02.999 Write (01h): Supported LBA-Change 00:15:02.999 Read (02h): Supported 00:15:02.999 Compare (05h): Supported 00:15:02.999 Write Zeroes (08h): Supported LBA-Change 00:15:02.999 Dataset Management (09h): Supported LBA-Change 00:15:02.999 Copy (19h): Supported LBA-Change 00:15:02.999 Unknown (79h): Supported LBA-Change 00:15:02.999 Unknown (7Ah): Supported 00:15:02.999 00:15:02.999 Error Log 00:15:02.999 ========= 00:15:02.999 00:15:02.999 Arbitration 00:15:02.999 
=========== 00:15:02.999 Arbitration Burst: 1 00:15:02.999 00:15:02.999 Power Management 00:15:02.999 ================ 00:15:02.999 Number of Power States: 1 00:15:02.999 Current Power State: Power State #0 00:15:02.999 Power State #0: 00:15:02.999 Max Power: 0.00 W 00:15:02.999 Non-Operational State: Operational 00:15:02.999 Entry Latency: Not Reported 00:15:02.999 Exit Latency: Not Reported 00:15:02.999 Relative Read Throughput: 0 00:15:02.999 Relative Read Latency: 0 00:15:02.999 Relative Write Throughput: 0 00:15:02.999 Relative Write Latency: 0 00:15:02.999 Idle Power: Not Reported 00:15:02.999 Active Power: Not Reported 00:15:02.999 Non-Operational Permissive Mode: Not Supported 00:15:02.999 00:15:02.999 Health Information 00:15:02.999 ================== 00:15:02.999 Critical Warnings: 00:15:02.999 Available Spare Space: OK 00:15:02.999 Temperature: OK 00:15:02.999 Device Reliability: OK 00:15:02.999 Read Only: No 00:15:02.999 Volatile Memory Backup: OK 00:15:02.999 Current Temperature: 0 Kelvin[2024-07-15 23:58:37.393367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:02.999 [2024-07-15 23:58:37.401158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:02.999 [2024-07-15 23:58:37.401205] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:02.999 [2024-07-15 23:58:37.401224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.999 [2024-07-15 23:58:37.401236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.999 [2024-07-15 23:58:37.401248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:02.999 [2024-07-15 23:58:37.401259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.999 [2024-07-15 23:58:37.401353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:02.999 [2024-07-15 23:58:37.401377] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:03.000 [2024-07-15 23:58:37.402361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:03.000 [2024-07-15 23:58:37.402440] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:03.000 [2024-07-15 23:58:37.402456] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:03.000 [2024-07-15 23:58:37.403365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:03.000 [2024-07-15 23:58:37.403396] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:03.000 [2024-07-15 23:58:37.403472] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:03.000 [2024-07-15 23:58:37.406159] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:03.000 (-273 Celsius) 00:15:03.000 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:03.000 Available Spare: 0% 00:15:03.000 Available Spare Threshold: 0% 00:15:03.000 Life Percentage Used: 0% 00:15:03.000 Data Units Read: 0 00:15:03.000 Data Units Written: 0 00:15:03.000 Host Read Commands: 0 00:15:03.000 Host 
Write Commands: 0 00:15:03.000 Controller Busy Time: 0 minutes 00:15:03.000 Power Cycles: 0 00:15:03.000 Power On Hours: 0 hours 00:15:03.000 Unsafe Shutdowns: 0 00:15:03.000 Unrecoverable Media Errors: 0 00:15:03.000 Lifetime Error Log Entries: 0 00:15:03.000 Warning Temperature Time: 0 minutes 00:15:03.000 Critical Temperature Time: 0 minutes 00:15:03.000 00:15:03.000 Number of Queues 00:15:03.000 ================ 00:15:03.000 Number of I/O Submission Queues: 127 00:15:03.000 Number of I/O Completion Queues: 127 00:15:03.000 00:15:03.000 Active Namespaces 00:15:03.000 ================= 00:15:03.000 Namespace ID:1 00:15:03.000 Error Recovery Timeout: Unlimited 00:15:03.000 Command Set Identifier: NVM (00h) 00:15:03.000 Deallocate: Supported 00:15:03.000 Deallocated/Unwritten Error: Not Supported 00:15:03.000 Deallocated Read Value: Unknown 00:15:03.000 Deallocate in Write Zeroes: Not Supported 00:15:03.000 Deallocated Guard Field: 0xFFFF 00:15:03.000 Flush: Supported 00:15:03.000 Reservation: Supported 00:15:03.000 Namespace Sharing Capabilities: Multiple Controllers 00:15:03.000 Size (in LBAs): 131072 (0GiB) 00:15:03.000 Capacity (in LBAs): 131072 (0GiB) 00:15:03.000 Utilization (in LBAs): 131072 (0GiB) 00:15:03.000 NGUID: 15B97A8A62B64AEDAC67B92B48116007 00:15:03.000 UUID: 15b97a8a-62b6-4aed-ac67-b92b48116007 00:15:03.000 Thin Provisioning: Not Supported 00:15:03.000 Per-NS Atomic Units: Yes 00:15:03.000 Atomic Boundary Size (Normal): 0 00:15:03.000 Atomic Boundary Size (PFail): 0 00:15:03.000 Atomic Boundary Offset: 0 00:15:03.000 Maximum Single Source Range Length: 65535 00:15:03.000 Maximum Copy Length: 65535 00:15:03.000 Maximum Source Range Count: 1 00:15:03.000 NGUID/EUI64 Never Reused: No 00:15:03.000 Namespace Write Protected: No 00:15:03.000 Number of LBA Formats: 1 00:15:03.000 Current LBA Format: LBA Format #00 00:15:03.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:03.000 00:15:03.000 23:58:37 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:03.000 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.259 [2024-07-15 23:58:37.615465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:08.525 Initializing NVMe Controllers 00:15:08.525 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:08.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:08.525 Initialization complete. Launching workers. 00:15:08.525 ======================================================== 00:15:08.525 Latency(us) 00:15:08.525 Device Information : IOPS MiB/s Average min max 00:15:08.525 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24146.24 94.32 5300.96 1451.78 7630.33 00:15:08.525 ======================================================== 00:15:08.525 Total : 24146.24 94.32 5300.96 1451.78 7630.33 00:15:08.525 00:15:08.525 [2024-07-15 23:58:42.722467] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:08.525 23:58:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:08.525 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.525 [2024-07-15 23:58:42.949164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.844 Initializing NVMe Controllers 00:15:13.844 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:15:13.844 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:13.844 Initialization complete. Launching workers. 00:15:13.844 ======================================================== 00:15:13.844 Latency(us) 00:15:13.844 Device Information : IOPS MiB/s Average min max 00:15:13.844 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24074.40 94.04 5316.21 1464.61 10555.92 00:15:13.844 ======================================================== 00:15:13.844 Total : 24074.40 94.04 5316.21 1464.61 10555.92 00:15:13.844 00:15:13.844 [2024-07-15 23:58:47.966028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:13.844 23:58:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:13.844 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.844 [2024-07-15 23:58:48.189163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.107 [2024-07-15 23:58:53.339277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.107 Initializing NVMe Controllers 00:15:19.107 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.107 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:19.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:19.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:19.107 Initialization complete. 
Launching workers. 00:15:19.107 Starting thread on core 2 00:15:19.107 Starting thread on core 3 00:15:19.107 Starting thread on core 1 00:15:19.107 23:58:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:19.107 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.107 [2024-07-15 23:58:53.621288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.393 [2024-07-15 23:58:56.685946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.393 Initializing NVMe Controllers 00:15:22.393 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.393 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.393 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:22.393 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:22.393 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:22.393 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:22.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:22.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:22.393 Initialization complete. Launching workers. 
00:15:22.393 Starting thread on core 1 with urgent priority queue 00:15:22.393 Starting thread on core 2 with urgent priority queue 00:15:22.393 Starting thread on core 3 with urgent priority queue 00:15:22.393 Starting thread on core 0 with urgent priority queue 00:15:22.393 SPDK bdev Controller (SPDK2 ) core 0: 6814.67 IO/s 14.67 secs/100000 ios 00:15:22.393 SPDK bdev Controller (SPDK2 ) core 1: 7299.00 IO/s 13.70 secs/100000 ios 00:15:22.393 SPDK bdev Controller (SPDK2 ) core 2: 7543.33 IO/s 13.26 secs/100000 ios 00:15:22.393 SPDK bdev Controller (SPDK2 ) core 3: 7258.33 IO/s 13.78 secs/100000 ios 00:15:22.393 ======================================================== 00:15:22.393 00:15:22.393 23:58:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:22.393 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.650 [2024-07-15 23:58:56.956305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.650 Initializing NVMe Controllers 00:15:22.650 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.650 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.650 Namespace ID: 1 size: 0GB 00:15:22.650 Initialization complete. 00:15:22.650 INFO: using host memory buffer for IO 00:15:22.650 Hello world! 
00:15:22.650 [2024-07-15 23:58:56.967368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.650 23:58:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:22.650 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.907 [2024-07-15 23:58:57.238199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.840 Initializing NVMe Controllers 00:15:23.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.840 Initialization complete. Launching workers. 00:15:23.840 submit (in ns) avg, min, max = 8969.2, 4521.5, 4017573.3 00:15:23.840 complete (in ns) avg, min, max = 30619.7, 2653.3, 7009730.4 00:15:23.840 00:15:23.840 Submit histogram 00:15:23.840 ================ 00:15:23.840 Range in us Cumulative Count 00:15:23.840 4.504 - 4.527: 0.0422% ( 5) 00:15:23.840 4.527 - 4.551: 0.7590% ( 85) 00:15:23.840 4.551 - 4.575: 2.4458% ( 200) 00:15:23.840 4.575 - 4.599: 5.5748% ( 371) 00:15:23.840 4.599 - 4.622: 9.8929% ( 512) 00:15:23.840 4.622 - 4.646: 12.5833% ( 319) 00:15:23.840 4.646 - 4.670: 14.1098% ( 181) 00:15:23.840 4.670 - 4.693: 15.0460% ( 111) 00:15:23.840 4.693 - 4.717: 15.8472% ( 95) 00:15:23.840 4.717 - 4.741: 17.5255% ( 199) 00:15:23.840 4.741 - 4.764: 20.8147% ( 390) 00:15:23.840 4.764 - 4.788: 25.5967% ( 567) 00:15:23.840 4.788 - 4.812: 29.3160% ( 441) 00:15:23.840 4.812 - 4.836: 31.1377% ( 216) 00:15:23.840 4.836 - 4.859: 32.0907% ( 113) 00:15:23.840 4.859 - 4.883: 32.5715% ( 57) 00:15:23.840 4.883 - 4.907: 32.9088% ( 40) 00:15:23.840 4.907 - 4.930: 33.3137% ( 48) 00:15:23.840 4.930 - 4.954: 33.6763% ( 43) 00:15:23.840 4.954 - 4.978: 34.1486% ( 56) 
00:15:23.840 4.978 - 5.001: 34.4269% ( 33) 00:15:23.840 5.001 - 5.025: 34.6799% ( 30) 00:15:23.840 5.025 - 5.049: 34.8823% ( 24) 00:15:23.840 5.049 - 5.073: 34.9751% ( 11) 00:15:23.840 5.073 - 5.096: 35.0173% ( 5) 00:15:23.840 5.096 - 5.120: 35.0679% ( 6) 00:15:23.840 5.120 - 5.144: 35.1860% ( 14) 00:15:23.840 5.144 - 5.167: 36.0378% ( 101) 00:15:23.840 5.167 - 5.191: 40.3728% ( 514) 00:15:23.841 5.191 - 5.215: 48.7644% ( 995) 00:15:23.841 5.215 - 5.239: 51.1006% ( 277) 00:15:23.841 5.239 - 5.262: 52.6440% ( 183) 00:15:23.841 5.262 - 5.286: 53.9175% ( 151) 00:15:23.841 5.286 - 5.310: 54.9549% ( 123) 00:15:23.841 5.310 - 5.333: 62.1574% ( 854) 00:15:23.841 5.333 - 5.357: 66.3574% ( 498) 00:15:23.841 5.357 - 5.381: 68.5249% ( 257) 00:15:23.841 5.381 - 5.404: 69.4189% ( 106) 00:15:23.841 5.404 - 5.428: 71.1225% ( 202) 00:15:23.841 5.428 - 5.452: 73.3659% ( 266) 00:15:23.841 5.452 - 5.476: 73.9732% ( 72) 00:15:23.841 5.476 - 5.499: 74.3105% ( 40) 00:15:23.841 5.499 - 5.523: 74.5298% ( 26) 00:15:23.841 5.523 - 5.547: 74.8840% ( 42) 00:15:23.841 5.547 - 5.570: 84.8950% ( 1187) 00:15:23.841 5.570 - 5.594: 89.1457% ( 504) 00:15:23.841 5.594 - 5.618: 91.5830% ( 289) 00:15:23.841 5.618 - 5.641: 92.5108% ( 110) 00:15:23.841 5.641 - 5.665: 93.1180% ( 72) 00:15:23.841 5.665 - 5.689: 93.3457% ( 27) 00:15:23.841 5.689 - 5.713: 93.5059% ( 19) 00:15:23.841 5.713 - 5.736: 93.5819% ( 9) 00:15:23.841 5.736 - 5.760: 93.6325% ( 6) 00:15:23.841 5.760 - 5.784: 93.6831% ( 6) 00:15:23.841 5.784 - 5.807: 93.8686% ( 22) 00:15:23.841 5.807 - 5.831: 93.9782% ( 13) 00:15:23.841 5.831 - 5.855: 94.1300% ( 18) 00:15:23.841 5.855 - 5.879: 94.2566% ( 15) 00:15:23.841 5.879 - 5.902: 94.3662% ( 13) 00:15:23.841 5.902 - 5.926: 94.4590% ( 11) 00:15:23.841 5.926 - 5.950: 94.5349% ( 9) 00:15:23.841 5.950 - 5.973: 94.5939% ( 7) 00:15:23.841 5.973 - 5.997: 94.6108% ( 2) 00:15:23.841 5.997 - 6.021: 94.7373% ( 15) 00:15:23.841 6.021 - 6.044: 94.7710% ( 4) 00:15:23.841 6.044 - 6.068: 94.7795% ( 1) 00:15:23.841 
6.068 - 6.116: 94.9650% ( 22) 00:15:23.841 6.116 - 6.163: 95.0578% ( 11) 00:15:23.841 6.163 - 6.210: 95.1505% ( 11) 00:15:23.841 6.210 - 6.258: 95.2686% ( 14) 00:15:23.841 6.258 - 6.305: 95.3108% ( 5) 00:15:23.841 6.305 - 6.353: 95.8337% ( 62) 00:15:23.841 6.353 - 6.400: 95.8927% ( 7) 00:15:23.841 6.400 - 6.447: 95.9265% ( 4) 00:15:23.841 6.447 - 6.495: 96.2469% ( 38) 00:15:23.841 6.495 - 6.542: 96.3566% ( 13) 00:15:23.841 6.542 - 6.590: 96.3988% ( 5) 00:15:23.841 6.590 - 6.637: 96.5084% ( 13) 00:15:23.841 6.637 - 6.684: 96.6939% ( 22) 00:15:23.841 6.684 - 6.732: 96.7783% ( 10) 00:15:23.841 6.732 - 6.779: 96.8373% ( 7) 00:15:23.841 6.779 - 6.827: 96.9048% ( 8) 00:15:23.841 6.827 - 6.874: 97.0819% ( 21) 00:15:23.841 6.874 - 6.921: 98.3217% ( 147) 00:15:23.841 6.921 - 6.969: 98.7771% ( 54) 00:15:23.841 6.969 - 7.016: 98.9289% ( 18) 00:15:23.841 7.016 - 7.064: 98.9626% ( 4) 00:15:23.841 7.064 - 7.111: 98.9879% ( 3) 00:15:23.841 7.111 - 7.159: 99.0217% ( 4) 00:15:23.841 7.159 - 7.206: 99.0470% ( 3) 00:15:23.841 7.206 - 7.253: 99.0554% ( 1) 00:15:23.841 7.253 - 7.301: 99.0638% ( 1) 00:15:23.841 7.301 - 7.348: 99.0723% ( 1) 00:15:23.841 7.490 - 7.538: 99.0807% ( 1) 00:15:23.841 7.585 - 7.633: 99.0891% ( 1) 00:15:23.841 7.633 - 7.680: 99.0976% ( 1) 00:15:23.841 7.727 - 7.775: 99.1229% ( 3) 00:15:23.841 7.917 - 7.964: 99.1397% ( 2) 00:15:23.841 7.964 - 8.012: 99.1566% ( 2) 00:15:23.841 8.012 - 8.059: 99.1651% ( 1) 00:15:23.841 8.059 - 8.107: 99.1735% ( 1) 00:15:23.841 8.107 - 8.154: 99.1904% ( 2) 00:15:23.841 8.154 - 8.201: 99.1988% ( 1) 00:15:23.841 8.201 - 8.249: 99.2072% ( 1) 00:15:23.841 8.296 - 8.344: 99.2241% ( 2) 00:15:23.841 8.391 - 8.439: 99.2325% ( 1) 00:15:23.841 8.439 - 8.486: 99.2494% ( 2) 00:15:23.841 8.770 - 8.818: 99.2578% ( 1) 00:15:23.841 8.818 - 8.865: 99.2747% ( 2) 00:15:23.841 8.960 - 9.007: 99.3000% ( 3) 00:15:23.841 9.007 - 9.055: 99.3337% ( 4) 00:15:23.841 9.102 - 9.150: 99.3422% ( 1) 00:15:23.841 9.150 - 9.197: 99.3506% ( 1) 00:15:23.841 9.244 - 
9.292: 99.3675% ( 2) 00:15:23.841 9.339 - 9.387: 99.3843% ( 2) 00:15:23.841 9.434 - 9.481: 99.4012% ( 2) 00:15:23.841 9.481 - 9.529: 99.4181% ( 2) 00:15:23.841 9.529 - 9.576: 99.4434% ( 3) 00:15:23.841 9.576 - 9.624: 99.4518% ( 1) 00:15:23.841 9.624 - 9.671: 99.4602% ( 1) 00:15:23.841 9.671 - 9.719: 99.4687% ( 1) 00:15:23.841 9.719 - 9.766: 99.4771% ( 1) 00:15:23.841 9.861 - 9.908: 99.4855% ( 1) 00:15:23.841 9.908 - 9.956: 99.4940% ( 1) 00:15:23.841 10.003 - 10.050: 99.5108% ( 2) 00:15:23.841 10.050 - 10.098: 99.5193% ( 1) 00:15:23.841 10.145 - 10.193: 99.5361% ( 2) 00:15:23.841 10.240 - 10.287: 99.5446% ( 1) 00:15:23.841 10.572 - 10.619: 99.5530% ( 1) 00:15:23.841 10.619 - 10.667: 99.5614% ( 1) 00:15:23.841 10.667 - 10.714: 99.5699% ( 1) 00:15:23.841 10.714 - 10.761: 99.5783% ( 1) 00:15:23.841 10.809 - 10.856: 99.5867% ( 1) 00:15:23.841 10.999 - 11.046: 99.5952% ( 1) 00:15:23.841 11.046 - 11.093: 99.6036% ( 1) 00:15:23.841 11.093 - 11.141: 99.6120% ( 1) 00:15:23.841 11.378 - 11.425: 99.6205% ( 1) 00:15:23.841 11.425 - 11.473: 99.6373% ( 2) 00:15:23.841 11.520 - 11.567: 99.6458% ( 1) 00:15:23.841 11.662 - 11.710: 99.6542% ( 1) 00:15:23.841 12.326 - 12.421: 99.6626% ( 1) 00:15:23.841 12.421 - 12.516: 99.6711% ( 1) 00:15:23.841 12.895 - 12.990: 99.6795% ( 1) 00:15:23.841 12.990 - 13.084: 99.6879% ( 1) 00:15:23.841 13.179 - 13.274: 99.7048% ( 2) 00:15:23.841 13.369 - 13.464: 99.7217% ( 2) 00:15:23.841 13.464 - 13.559: 99.7301% ( 1) 00:15:23.841 13.559 - 13.653: 99.7386% ( 1) 00:15:23.841 13.653 - 13.748: 99.7639% ( 3) 00:15:23.841 13.748 - 13.843: 99.7807% ( 2) 00:15:23.841 13.843 - 13.938: 99.8398% ( 7) 00:15:23.841 13.938 - 14.033: 99.8651% ( 3) 00:15:23.841 14.033 - 14.127: 99.8735% ( 1) 00:15:23.841 14.317 - 14.412: 99.8819% ( 1) 00:15:23.841 14.507 - 14.601: 99.8904% ( 1) 00:15:23.841 15.834 - 15.929: 99.8988% ( 1) 00:15:23.841 16.972 - 17.067: 99.9072% ( 1) 00:15:23.841 3980.705 - 4004.978: 99.9410% ( 4) 00:15:23.841 4004.978 - 4029.250: 100.0000% ( 7) 
00:15:23.841 00:15:23.841 Complete histogram 00:15:23.841 ================== 00:15:23.841 Range in us Cumulative Count 00:15:23.841 2.643 - 2.655: 0.0084% ( 1) 00:15:23.841 2.655 - 2.667: 3.3988% ( 402) 00:15:23.841 2.667 - 2.679: 45.7367% ( 5020) 00:15:23.841 2.679 - 2.690: 72.1936% ( 3137) 00:15:23.841 2.690 - 2.702: 77.5576% ( 636) 00:15:23.841 [2024-07-15 23:58:58.340359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.098 2.702 - 2.714: 83.5456% ( 710) 00:15:24.098 2.714 - 2.726: 89.4746% ( 703) 00:15:24.098 2.726 - 2.738: 94.3325% ( 576) 00:15:24.098 2.738 - 2.750: 96.6096% ( 270) 00:15:24.098 2.750 - 2.761: 97.3096% ( 83) 00:15:24.098 2.761 - 2.773: 97.7819% ( 56) 00:15:24.098 2.773 - 2.785: 98.1277% ( 41) 00:15:24.098 2.785 - 2.797: 98.4060% ( 33) 00:15:24.098 2.797 - 2.809: 98.5662% ( 19) 00:15:24.098 2.809 - 2.821: 98.6337% ( 8) 00:15:24.098 2.821 - 2.833: 98.7012% ( 8) 00:15:24.098 2.833 - 2.844: 98.7349% ( 4) 00:15:24.098 2.844 - 2.856: 98.7518% ( 2) 00:15:24.098 2.856 - 2.868: 98.7602% ( 1) 00:15:24.098 2.880 - 2.892: 98.7687% ( 1) 00:15:24.098 2.892 - 2.904: 98.7771% ( 1) 00:15:24.098 2.904 - 2.916: 98.8024% ( 3) 00:15:24.098 2.916 - 2.927: 98.8193% ( 2) 00:15:24.098 2.927 - 2.939: 98.8446% ( 3) 00:15:24.098 2.951 - 2.963: 98.8699% ( 3) 00:15:24.098 2.963 - 2.975: 98.8783% ( 1) 00:15:24.098 2.987 - 2.999: 98.8867% ( 1) 00:15:24.098 3.366 - 3.390: 98.8952% ( 1) 00:15:24.098 3.390 - 3.413: 98.9205% ( 3) 00:15:24.098 3.413 - 3.437: 98.9373% ( 2) 00:15:24.098 3.437 - 3.461: 98.9711% ( 4) 00:15:24.098 3.484 - 3.508: 98.9879% ( 2) 00:15:24.098 3.508 - 3.532: 99.0385% ( 6) 00:15:24.098 3.532 - 3.556: 99.0723% ( 4) 00:15:24.098 3.556 - 3.579: 99.0891% ( 2) 00:15:24.098 3.603 - 3.627: 99.1313% ( 5) 00:15:24.098 3.650 - 3.674: 99.1397% ( 1) 00:15:24.098 3.721 - 3.745: 99.1482% ( 1) 00:15:24.098 3.816 - 3.840: 99.1566% ( 1) 00:15:24.098 5.784 - 5.807: 99.1651% ( 1) 00:15:24.098 5.879 - 5.902: 
99.1735% ( 1) 00:15:24.098 5.926 - 5.950: 99.1819% ( 1) 00:15:24.098 6.116 - 6.163: 99.1904% ( 1) 00:15:24.098 6.353 - 6.400: 99.1988% ( 1) 00:15:24.098 6.542 - 6.590: 99.2157% ( 2) 00:15:24.098 6.590 - 6.637: 99.2241% ( 1) 00:15:24.098 6.684 - 6.732: 99.2325% ( 1) 00:15:24.098 6.732 - 6.779: 99.2410% ( 1) 00:15:24.098 6.921 - 6.969: 99.2494% ( 1) 00:15:24.098 7.253 - 7.301: 99.2578% ( 1) 00:15:24.098 7.585 - 7.633: 99.2747% ( 2) 00:15:24.098 8.439 - 8.486: 99.2831% ( 1) 00:15:24.098 13.748 - 13.843: 99.2916% ( 1) 00:15:24.098 14.033 - 14.127: 99.3000% ( 1) 00:15:24.098 24.273 - 24.462: 99.3084% ( 1) 00:15:24.098 3980.705 - 4004.978: 99.7976% ( 58) 00:15:24.098 4004.978 - 4029.250: 99.9916% ( 23) 00:15:24.098 6990.507 - 7039.052: 100.0000% ( 1) 00:15:24.098 00:15:24.098 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:24.098 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:24.098 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:24.098 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:24.098 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:24.355 [ 00:15:24.355 { 00:15:24.355 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:24.355 "subtype": "Discovery", 00:15:24.355 "listen_addresses": [], 00:15:24.355 "allow_any_host": true, 00:15:24.355 "hosts": [] 00:15:24.355 }, 00:15:24.355 { 00:15:24.355 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:24.355 "subtype": "NVMe", 00:15:24.355 "listen_addresses": [ 00:15:24.355 { 00:15:24.355 "trtype": "VFIOUSER", 00:15:24.355 "adrfam": "IPv4", 00:15:24.355 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:24.355 
"trsvcid": "0" 00:15:24.355 } 00:15:24.355 ], 00:15:24.355 "allow_any_host": true, 00:15:24.355 "hosts": [], 00:15:24.355 "serial_number": "SPDK1", 00:15:24.355 "model_number": "SPDK bdev Controller", 00:15:24.355 "max_namespaces": 32, 00:15:24.355 "min_cntlid": 1, 00:15:24.355 "max_cntlid": 65519, 00:15:24.355 "namespaces": [ 00:15:24.355 { 00:15:24.355 "nsid": 1, 00:15:24.355 "bdev_name": "Malloc1", 00:15:24.355 "name": "Malloc1", 00:15:24.355 "nguid": "D852A24E80274F29BE2EC34AC06416A8", 00:15:24.355 "uuid": "d852a24e-8027-4f29-be2e-c34ac06416a8" 00:15:24.355 }, 00:15:24.355 { 00:15:24.355 "nsid": 2, 00:15:24.355 "bdev_name": "Malloc3", 00:15:24.355 "name": "Malloc3", 00:15:24.355 "nguid": "F8C355A0526244BD8B9CB72BCDB1AA3B", 00:15:24.355 "uuid": "f8c355a0-5262-44bd-8b9c-b72bcdb1aa3b" 00:15:24.355 } 00:15:24.355 ] 00:15:24.355 }, 00:15:24.355 { 00:15:24.355 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:24.355 "subtype": "NVMe", 00:15:24.355 "listen_addresses": [ 00:15:24.355 { 00:15:24.355 "trtype": "VFIOUSER", 00:15:24.355 "adrfam": "IPv4", 00:15:24.355 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:24.355 "trsvcid": "0" 00:15:24.355 } 00:15:24.355 ], 00:15:24.355 "allow_any_host": true, 00:15:24.355 "hosts": [], 00:15:24.355 "serial_number": "SPDK2", 00:15:24.355 "model_number": "SPDK bdev Controller", 00:15:24.355 "max_namespaces": 32, 00:15:24.355 "min_cntlid": 1, 00:15:24.355 "max_cntlid": 65519, 00:15:24.355 "namespaces": [ 00:15:24.355 { 00:15:24.355 "nsid": 1, 00:15:24.355 "bdev_name": "Malloc2", 00:15:24.355 "name": "Malloc2", 00:15:24.355 "nguid": "15B97A8A62B64AEDAC67B92B48116007", 00:15:24.355 "uuid": "15b97a8a-62b6-4aed-ac67-b92b48116007" 00:15:24.355 } 00:15:24.355 ] 00:15:24.355 } 00:15:24.355 ] 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1235860 00:15:24.355 23:58:58 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:24.355 23:58:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:24.355 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.355 [2024-07-15 23:58:58.854685] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.612 Malloc4 00:15:24.612 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:24.870 [2024-07-15 23:58:59.308241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.870 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:24.870 Asynchronous Event Request test 00:15:24.870 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:24.870 Attached to 
/var/run/vfio-user/domain/vfio-user2/2 00:15:24.870 Registering asynchronous event callbacks... 00:15:24.870 Starting namespace attribute notice tests for all controllers... 00:15:24.870 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:24.870 aer_cb - Changed Namespace 00:15:24.870 Cleaning up... 00:15:25.127 [ 00:15:25.127 { 00:15:25.127 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:25.127 "subtype": "Discovery", 00:15:25.127 "listen_addresses": [], 00:15:25.127 "allow_any_host": true, 00:15:25.127 "hosts": [] 00:15:25.127 }, 00:15:25.127 { 00:15:25.127 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:25.127 "subtype": "NVMe", 00:15:25.127 "listen_addresses": [ 00:15:25.127 { 00:15:25.127 "trtype": "VFIOUSER", 00:15:25.127 "adrfam": "IPv4", 00:15:25.127 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:25.127 "trsvcid": "0" 00:15:25.127 } 00:15:25.127 ], 00:15:25.127 "allow_any_host": true, 00:15:25.127 "hosts": [], 00:15:25.127 "serial_number": "SPDK1", 00:15:25.127 "model_number": "SPDK bdev Controller", 00:15:25.127 "max_namespaces": 32, 00:15:25.127 "min_cntlid": 1, 00:15:25.127 "max_cntlid": 65519, 00:15:25.127 "namespaces": [ 00:15:25.127 { 00:15:25.127 "nsid": 1, 00:15:25.127 "bdev_name": "Malloc1", 00:15:25.127 "name": "Malloc1", 00:15:25.127 "nguid": "D852A24E80274F29BE2EC34AC06416A8", 00:15:25.127 "uuid": "d852a24e-8027-4f29-be2e-c34ac06416a8" 00:15:25.127 }, 00:15:25.127 { 00:15:25.127 "nsid": 2, 00:15:25.127 "bdev_name": "Malloc3", 00:15:25.127 "name": "Malloc3", 00:15:25.127 "nguid": "F8C355A0526244BD8B9CB72BCDB1AA3B", 00:15:25.127 "uuid": "f8c355a0-5262-44bd-8b9c-b72bcdb1aa3b" 00:15:25.127 } 00:15:25.127 ] 00:15:25.127 }, 00:15:25.127 { 00:15:25.127 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:25.127 "subtype": "NVMe", 00:15:25.127 "listen_addresses": [ 00:15:25.127 { 00:15:25.127 "trtype": "VFIOUSER", 00:15:25.127 "adrfam": "IPv4", 00:15:25.127 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:15:25.127 "trsvcid": "0" 00:15:25.127 } 00:15:25.127 ], 00:15:25.127 "allow_any_host": true, 00:15:25.127 "hosts": [], 00:15:25.127 "serial_number": "SPDK2", 00:15:25.127 "model_number": "SPDK bdev Controller", 00:15:25.127 "max_namespaces": 32, 00:15:25.127 "min_cntlid": 1, 00:15:25.127 "max_cntlid": 65519, 00:15:25.127 "namespaces": [ 00:15:25.127 { 00:15:25.127 "nsid": 1, 00:15:25.127 "bdev_name": "Malloc2", 00:15:25.127 "name": "Malloc2", 00:15:25.127 "nguid": "15B97A8A62B64AEDAC67B92B48116007", 00:15:25.127 "uuid": "15b97a8a-62b6-4aed-ac67-b92b48116007" 00:15:25.127 }, 00:15:25.127 { 00:15:25.127 "nsid": 2, 00:15:25.128 "bdev_name": "Malloc4", 00:15:25.128 "name": "Malloc4", 00:15:25.128 "nguid": "E5F578C579B34E3F80743BFB8DFF4B52", 00:15:25.128 "uuid": "e5f578c5-79b3-4e3f-8074-3bfb8dff4b52" 00:15:25.128 } 00:15:25.128 ] 00:15:25.128 } 00:15:25.128 ] 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1235860 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1231697 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1231697 ']' 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1231697 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:25.128 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1231697 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1231697' 00:15:25.386 killing process with pid 1231697 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1231697 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1231697 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:25.386 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1236000 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1236000' 00:15:25.387 Process pid: 1236000 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1236000 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1236000 ']' 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:25.387 23:58:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.645 [2024-07-15 23:58:59.927916] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:25.645 [2024-07-15 23:58:59.929203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:25.645 [2024-07-15 23:58:59.929273] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.645 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.645 [2024-07-15 23:58:59.989513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.645 [2024-07-15 23:59:00.083649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.645 [2024-07-15 23:59:00.083710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.645 [2024-07-15 23:59:00.083727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.645 [2024-07-15 23:59:00.083741] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.645 [2024-07-15 23:59:00.083753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.645 [2024-07-15 23:59:00.083834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.645 [2024-07-15 23:59:00.083887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.645 [2024-07-15 23:59:00.083937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.645 [2024-07-15 23:59:00.083940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.902 [2024-07-15 23:59:00.182927] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:25.902 [2024-07-15 23:59:00.183133] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:25.902 [2024-07-15 23:59:00.183403] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:25.902 [2024-07-15 23:59:00.183895] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:25.902 [2024-07-15 23:59:00.184156] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:15:25.902 23:59:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:25.902 23:59:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:25.902 23:59:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:26.834 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:27.093 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:27.093 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:27.093 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:27.093 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:27.093 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:27.353 Malloc1 00:15:27.353 23:59:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:27.612 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:27.870 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:28.436 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.436 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:28.436 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:28.436 Malloc2 00:15:28.436 23:59:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:28.694 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:28.950 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1236000 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1236000 ']' 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1236000 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1236000 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1236000' 00:15:29.208 killing 
process with pid 1236000 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1236000 00:15:29.208 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1236000 00:15:29.466 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:29.466 23:59:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:29.466 00:15:29.466 real 0m52.599s 00:15:29.466 user 3m27.962s 00:15:29.466 sys 0m4.228s 00:15:29.466 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.466 23:59:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:29.466 ************************************ 00:15:29.466 END TEST nvmf_vfio_user 00:15:29.466 ************************************ 00:15:29.466 23:59:03 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:29.466 23:59:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:29.466 23:59:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:29.466 23:59:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.467 ************************************ 00:15:29.467 START TEST nvmf_vfio_user_nvme_compliance 00:15:29.467 ************************************ 00:15:29.467 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:29.467 * Looking for test storage... 
00:15:29.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:29.467 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.725 23:59:03 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.725 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:29.726 23:59:03 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1236363 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1236363' 00:15:29.726 Process pid: 1236363 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1236363 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1236363 ']' 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.726 23:59:03 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:29.726 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.726 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:29.726 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:29.726 [2024-07-15 23:59:04.047669] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:29.726 [2024-07-15 23:59:04.047777] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.726 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.726 [2024-07-15 23:59:04.107812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.726 [2024-07-15 23:59:04.194665] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.726 [2024-07-15 23:59:04.194722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:29.726 [2024-07-15 23:59:04.194738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.726 [2024-07-15 23:59:04.194752] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.726 [2024-07-15 23:59:04.194764] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.726 [2024-07-15 23:59:04.194847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.726 [2024-07-15 23:59:04.194904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.726 [2024-07-15 23:59:04.194901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.984 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:29.985 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:29.985 23:59:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 malloc0 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.918 23:59:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:30.918 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.176 00:15:31.176 00:15:31.176 CUnit - A unit testing framework for C - Version 2.1-3 00:15:31.176 http://cunit.sourceforge.net/ 00:15:31.176 00:15:31.176 00:15:31.176 Suite: nvme_compliance 00:15:31.176 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 23:59:05.526718] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.176 [2024-07-15 23:59:05.528236] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:31.176 [2024-07-15 23:59:05.528264] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:31.176 [2024-07-15 23:59:05.528280] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:31.176 [2024-07-15 23:59:05.529744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.176 passed 00:15:31.176 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 23:59:05.631476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.176 [2024-07-15 23:59:05.634501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.176 passed 00:15:31.434 Test: admin_identify_ns ...[2024-07-15 23:59:05.736153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.434 [2024-07-15 23:59:05.798188] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:31.434 [2024-07-15 23:59:05.806161] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:31.434 [2024-07-15 23:59:05.827319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:15:31.434 passed 00:15:31.434 Test: admin_get_features_mandatory_features ...[2024-07-15 23:59:05.923434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.434 [2024-07-15 23:59:05.926452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.692 passed 00:15:31.692 Test: admin_get_features_optional_features ...[2024-07-15 23:59:06.026084] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.692 [2024-07-15 23:59:06.032111] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.692 passed 00:15:31.692 Test: admin_set_features_number_of_queues ...[2024-07-15 23:59:06.129272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.957 [2024-07-15 23:59:06.236296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.958 passed 00:15:31.958 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 23:59:06.331444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.958 [2024-07-15 23:59:06.335474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.958 passed 00:15:31.958 Test: admin_get_log_page_with_lpo ...[2024-07-15 23:59:06.433619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.261 [2024-07-15 23:59:06.502162] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:32.261 [2024-07-15 23:59:06.515224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.261 passed 00:15:32.261 Test: fabric_property_get ...[2024-07-15 23:59:06.615371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.261 [2024-07-15 23:59:06.616694] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:15:32.261 [2024-07-15 23:59:06.618395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.261 passed 00:15:32.261 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 23:59:06.716067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.261 [2024-07-15 23:59:06.717408] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:32.261 [2024-07-15 23:59:06.719107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.519 passed 00:15:32.520 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 23:59:06.818143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.520 [2024-07-15 23:59:06.903155] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.520 [2024-07-15 23:59:06.919146] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.520 [2024-07-15 23:59:06.924296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.520 passed 00:15:32.520 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 23:59:07.021562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.520 [2024-07-15 23:59:07.022895] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:32.520 [2024-07-15 23:59:07.025602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.777 passed 00:15:32.777 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 23:59:07.123153] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:32.777 [2024-07-15 23:59:07.201150] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:32.777 [2024-07-15 23:59:07.225165] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:32.777 [2024-07-15 23:59:07.230328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.777 passed 00:15:33.035 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 23:59:07.327657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.035 [2024-07-15 23:59:07.329009] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:33.035 [2024-07-15 23:59:07.329053] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:33.035 [2024-07-15 23:59:07.332711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.035 passed 00:15:33.035 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 23:59:07.429152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.035 [2024-07-15 23:59:07.523147] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:33.035 [2024-07-15 23:59:07.531154] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:33.035 [2024-07-15 23:59:07.539162] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:33.035 [2024-07-15 23:59:07.547159] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:33.293 [2024-07-15 23:59:07.576286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.293 passed 00:15:33.293 Test: admin_create_io_sq_verify_pc ...[2024-07-15 23:59:07.672521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.293 [2024-07-15 23:59:07.689163] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:33.293 [2024-07-15 23:59:07.707065] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.293 passed 00:15:33.293 Test: admin_create_io_qp_max_qps ...[2024-07-15 23:59:07.804744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:34.669 [2024-07-15 23:59:08.913163] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:34.927 [2024-07-15 23:59:09.291262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:34.927 passed 00:15:34.927 Test: admin_create_io_sq_shared_cq ...[2024-07-15 23:59:09.389174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:35.186 [2024-07-15 23:59:09.523155] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:35.186 [2024-07-15 23:59:09.560241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:35.186 passed 00:15:35.186 00:15:35.186 Run Summary: Type Total Ran Passed Failed Inactive 00:15:35.186 suites 1 1 n/a 0 0 00:15:35.186 tests 18 18 18 0 0 00:15:35.186 asserts 360 360 360 0 n/a 00:15:35.186 00:15:35.186 Elapsed time = 1.699 seconds 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1236363 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1236363 ']' 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1236363 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1236363 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1236363' 00:15:35.186 killing process with pid 1236363 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 1236363 00:15:35.186 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1236363 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:35.445 00:15:35.445 real 0m5.888s 00:15:35.445 user 0m16.706s 00:15:35.445 sys 0m0.514s 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:35.445 ************************************ 00:15:35.445 END TEST nvmf_vfio_user_nvme_compliance 00:15:35.445 ************************************ 00:15:35.445 23:59:09 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.445 23:59:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:35.445 23:59:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:35.445 23:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.445 ************************************ 00:15:35.445 START TEST nvmf_vfio_user_fuzz 00:15:35.445 ************************************ 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:35.445 * Looking for test storage... 00:15:35.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1236985 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:35.445 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1236985' 00:15:35.445 Process pid: 1236985 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1236985 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1236985 ']' 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:35.446 23:59:09 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.012 23:59:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:36.012 23:59:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:36.012 23:59:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 malloc0 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.947 
23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:36.947 23:59:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:09.017 Fuzzing completed. 
Shutting down the fuzz application 00:16:09.017 00:16:09.017 Dumping successful admin opcodes: 00:16:09.017 8, 9, 10, 24, 00:16:09.017 Dumping successful io opcodes: 00:16:09.017 0, 00:16:09.017 NS: 0x200003a1ef00 I/O qp, Total commands completed: 547921, total successful commands: 2106, random_seed: 783735936 00:16:09.017 NS: 0x200003a1ef00 admin qp, Total commands completed: 86587, total successful commands: 691, random_seed: 3202990272 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1236985 ']' 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1236985' 00:16:09.017 killing process with pid 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@965 -- # kill 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1236985 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:09.017 00:16:09.017 real 0m32.127s 00:16:09.017 user 0m31.871s 00:16:09.017 sys 0m27.828s 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:09.017 23:59:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:09.017 ************************************ 00:16:09.017 END TEST nvmf_vfio_user_fuzz 00:16:09.017 ************************************ 00:16:09.017 23:59:42 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:09.017 23:59:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:09.017 23:59:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:09.017 23:59:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.017 ************************************ 00:16:09.017 START TEST nvmf_host_management 00:16:09.017 ************************************ 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:09.017 * Looking for test storage... 
00:16:09.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.017 
23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.017 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.018 23:59:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.277 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:09.278 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.278 
23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:09.278 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:09.278 Found net devices under 0000:08:00.0: cvl_0_0 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:09.278 Found net devices under 0000:08:00.1: cvl_0_1 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.278 23:59:43 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.278 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:16:09.547 00:16:09.547 --- 10.0.0.2 ping statistics --- 00:16:09.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.547 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:16:09.547 00:16:09.547 --- 10.0.0.1 ping statistics --- 00:16:09.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.547 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:09.547 23:59:43 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1240976 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1240976 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1240976 ']' 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:09.547 23:59:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.547 [2024-07-15 23:59:43.942008] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:09.547 [2024-07-15 23:59:43.942098] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.547 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.547 [2024-07-15 23:59:44.008358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.806 [2024-07-15 23:59:44.097808] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.806 [2024-07-15 23:59:44.097861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.806 [2024-07-15 23:59:44.097877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.806 [2024-07-15 23:59:44.097891] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.806 [2024-07-15 23:59:44.097903] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:09.806 [2024-07-15 23:59:44.097986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.806 [2024-07-15 23:59:44.098037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.806 [2024-07-15 23:59:44.098087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:09.806 [2024-07-15 23:59:44.098090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.806 [2024-07-15 23:59:44.232641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.806 23:59:44 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:09.806 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.807 Malloc0 00:16:09.807 [2024-07-15 23:59:44.290107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.807 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1241110 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1241110 /var/tmp/bdevperf.sock 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1241110 ']' 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management 
-- common/autotest_common.sh@832 -- # local max_retries=100 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.065 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:10.065 { 00:16:10.065 "params": { 00:16:10.065 "name": "Nvme$subsystem", 00:16:10.065 "trtype": "$TEST_TRANSPORT", 00:16:10.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.065 "adrfam": "ipv4", 00:16:10.065 "trsvcid": "$NVMF_PORT", 00:16:10.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.065 "hdgst": ${hdgst:-false}, 00:16:10.065 "ddgst": ${ddgst:-false} 00:16:10.066 }, 00:16:10.066 "method": "bdev_nvme_attach_controller" 00:16:10.066 } 00:16:10.066 EOF 00:16:10.066 )") 00:16:10.066 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:10.066 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:10.066 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:10.066 23:59:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:10.066 "params": { 00:16:10.066 "name": "Nvme0", 00:16:10.066 "trtype": "tcp", 00:16:10.066 "traddr": "10.0.0.2", 00:16:10.066 "adrfam": "ipv4", 00:16:10.066 "trsvcid": "4420", 00:16:10.066 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:10.066 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:10.066 "hdgst": false, 00:16:10.066 "ddgst": false 00:16:10.066 }, 00:16:10.066 "method": "bdev_nvme_attach_controller" 00:16:10.066 }' 00:16:10.066 [2024-07-15 23:59:44.372904] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:10.066 [2024-07-15 23:59:44.372997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241110 ] 00:16:10.066 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.066 [2024-07-15 23:59:44.433513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.066 [2024-07-15 23:59:44.520818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.324 Running I/O for 10 seconds... 
00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.324 
23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:10.324 23:59:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=509 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 509 -ge 100 ']' 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:10.583 23:59:45 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.583 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.583 [2024-07-15 23:59:45.096695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.583 [2024-07-15 23:59:45.096862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168c7a0 is same with the state(5) to be set 00:16:10.842 [2024-07-15 23:59:45.100117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.842 [2024-07-15 23:59:45.100171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:10.842 [2024-07-15 23:59:45.100200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.842 [2024-07-15 23:59:45.100217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.842 [2024-07-15 23:59:45.100259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:10.842 [2024-07-15 23:59:45.100293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7a70 is same with the state(5) to be set 00:16:10.842 [2024-07-15 23:59:45.100391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.842 [2024-07-15 23:59:45.100592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.842 [2024-07-15 23:59:45.100609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.100966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.100985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.843 [2024-07-15 23:59:45.101248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:10.843 [2024-07-15 23:59:45.101424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.843 [2024-07-15 23:59:45.101567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:10.843 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.843 [2024-07-15 23:59:45.101678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1
lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.101964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.101981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.102000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.102016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.843 [2024-07-15 23:59:45.102035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.102052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:10.843 [2024-07-15 23:59:45.102070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.843 [2024-07-15 23:59:45.102090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102277] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:10.844 [2024-07-15 23:59:45.102704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:10.844 [2024-07-15 23:59:45.102779] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24f1da0 was disconnected and freed. reset controller. 00:16:10.844 [2024-07-15 23:59:45.104070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:10.844 task offset: 73728 on job bdev=Nvme0n1 fails 00:16:10.844 00:16:10.844 Latency(us) 00:16:10.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.844 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:10.844 Job: Nvme0n1 ended in about 0.42 seconds with error 00:16:10.844 Verification LBA range: start 0x0 length 0x400 00:16:10.844 Nvme0n1 : 0.42 1373.86 85.87 152.65 0.00 40506.56 3446.71 40389.59 00:16:10.844 =================================================================================================================== 00:16:10.844 Total : 1373.86 85.87 152.65 0.00 40506.56 3446.71 40389.59 00:16:10.844 [2024-07-15 23:59:45.106447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:10.844 [2024-07-15 23:59:45.106480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24f7a70 (9): Bad file descriptor 00:16:10.844 23:59:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.844 23:59:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:10.844 [2024-07-15 23:59:45.115403] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:11.777 23:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1241110 00:16:11.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1241110) - No such process 00:16:11.777 23:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:11.777 23:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:11.777 23:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:11.778 { 00:16:11.778 "params": { 00:16:11.778 "name": "Nvme$subsystem", 00:16:11.778 "trtype": "$TEST_TRANSPORT", 00:16:11.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.778 "adrfam": "ipv4", 00:16:11.778 "trsvcid": "$NVMF_PORT", 00:16:11.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.778 "hdgst": ${hdgst:-false}, 00:16:11.778 "ddgst": ${ddgst:-false} 00:16:11.778 }, 00:16:11.778 "method": "bdev_nvme_attach_controller" 00:16:11.778 } 00:16:11.778 EOF 00:16:11.778 )") 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:11.778 23:59:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:11.778 "params": { 00:16:11.778 "name": "Nvme0", 00:16:11.778 "trtype": "tcp", 00:16:11.778 "traddr": "10.0.0.2", 00:16:11.778 "adrfam": "ipv4", 00:16:11.778 "trsvcid": "4420", 00:16:11.778 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:11.778 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:11.778 "hdgst": false, 00:16:11.778 "ddgst": false 00:16:11.778 }, 00:16:11.778 "method": "bdev_nvme_attach_controller" 00:16:11.778 }' 00:16:11.778 [2024-07-15 23:59:46.160755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:11.778 [2024-07-15 23:59:46.160858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241228 ] 00:16:11.778 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.778 [2024-07-15 23:59:46.221554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.036 [2024-07-15 23:59:46.311534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.294 Running I/O for 1 seconds... 
00:16:13.228 00:16:13.228 Latency(us) 00:16:13.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:13.228 Verification LBA range: start 0x0 length 0x400 00:16:13.228 Nvme0n1 : 1.01 1460.27 91.27 0.00 0.00 42977.92 6941.96 37671.06 00:16:13.228 =================================================================================================================== 00:16:13.228 Total : 1460.27 91.27 0.00 0.00 42977.92 6941.96 37671.06 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.487 rmmod nvme_tcp 00:16:13.487 rmmod nvme_fabrics 00:16:13.487 rmmod nvme_keyring 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.487 
23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1240976 ']' 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1240976 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1240976 ']' 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1240976 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1240976 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1240976' 00:16:13.487 killing process with pid 1240976 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1240976 00:16:13.487 23:59:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1240976 00:16:13.746 [2024-07-15 23:59:48.010848] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.746 23:59:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.736 23:59:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.736 23:59:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:15.736 00:16:15.736 real 0m8.084s 00:16:15.736 user 0m18.550s 00:16:15.736 sys 0m2.300s 00:16:15.736 23:59:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:15.736 23:59:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.736 ************************************ 00:16:15.736 END TEST nvmf_host_management 00:16:15.736 ************************************ 00:16:15.736 23:59:50 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:15.736 23:59:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:15.736 23:59:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:15.736 23:59:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.736 ************************************ 00:16:15.736 START TEST nvmf_lvol 00:16:15.736 ************************************ 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:15.736 * Looking for test storage... 
00:16:15.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.736 23:59:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:17.634 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:17.634 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:17.634 Found net devices under 0000:08:00.0: cvl_0_0 00:16:17.634 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:17.635 Found net devices under 0000:08:00.1: cvl_0_1 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:16:17.635 00:16:17.635 --- 10.0.0.2 ping statistics --- 00:16:17.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.635 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:16:17.635 00:16:17.635 --- 10.0.0.1 ping statistics --- 00:16:17.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.635 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1242888 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1242888 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1242888 ']' 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 
-- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:17.635 23:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.635 [2024-07-15 23:59:52.009450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:17.635 [2024-07-15 23:59:52.009562] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.635 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.635 [2024-07-15 23:59:52.075467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.893 [2024-07-15 23:59:52.162183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.893 [2024-07-15 23:59:52.162243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.893 [2024-07-15 23:59:52.162259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.893 [2024-07-15 23:59:52.162271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.893 [2024-07-15 23:59:52.162283] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:17.893 [2024-07-15 23:59:52.162364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.893 [2024-07-15 23:59:52.162450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.893 [2024-07-15 23:59:52.162483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.893 23:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:18.149 [2024-07-15 23:59:52.562057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.149 23:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.406 23:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:18.406 23:59:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.969 23:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:18.969 23:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:19.225 23:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:19.482 23:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=64c5d8c6-6ed6-4257-971c-9dda78d3c304 00:16:19.482 23:59:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 64c5d8c6-6ed6-4257-971c-9dda78d3c304 lvol 20 00:16:19.740 23:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=12b33fe1-0a83-478c-871a-d3d1aa23aa6f 00:16:19.740 23:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:19.998 23:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12b33fe1-0a83-478c-871a-d3d1aa23aa6f 00:16:20.256 23:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:20.513 [2024-07-15 23:59:54.795254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.513 23:59:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:20.771 23:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1243125 00:16:20.771 23:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:20.771 23:59:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:20.771 EAL: No free 2048 kB hugepages reported on node 1 
00:16:21.704 23:59:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 12b33fe1-0a83-478c-871a-d3d1aa23aa6f MY_SNAPSHOT 00:16:21.962 23:59:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eff65b6d-623c-4d43-8e94-a7029b7699fd 00:16:21.962 23:59:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 12b33fe1-0a83-478c-871a-d3d1aa23aa6f 30 00:16:22.527 23:59:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eff65b6d-623c-4d43-8e94-a7029b7699fd MY_CLONE 00:16:22.527 23:59:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d67fa726-d64e-4e4b-9212-85607785cd65 00:16:22.528 23:59:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d67fa726-d64e-4e4b-9212-85607785cd65 00:16:23.460 23:59:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1243125 00:16:31.564 Initializing NVMe Controllers 00:16:31.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:31.564 Controller IO queue size 128, less than required. 00:16:31.564 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:31.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:31.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:31.564 Initialization complete. Launching workers. 
00:16:31.564 ======================================================== 00:16:31.564 Latency(us) 00:16:31.564 Device Information : IOPS MiB/s Average min max 00:16:31.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9732.80 38.02 13164.20 1877.70 89109.32 00:16:31.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9666.70 37.76 13253.45 2097.88 78330.20 00:16:31.564 ======================================================== 00:16:31.564 Total : 19399.50 75.78 13208.67 1877.70 89109.32 00:16:31.564 00:16:31.564 00:00:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:31.564 00:00:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12b33fe1-0a83-478c-871a-d3d1aa23aa6f 00:16:31.564 00:00:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64c5d8c6-6ed6-4257-971c-9dda78d3c304 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.822 rmmod nvme_tcp 00:16:31.822 rmmod nvme_fabrics 00:16:31.822 rmmod nvme_keyring 00:16:31.822 
00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1242888 ']' 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1242888 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1242888 ']' 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1242888 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1242888 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1242888' 00:16:31.822 killing process with pid 1242888 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1242888 00:16:31.822 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1242888 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.081 00:00:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.613 00:16:34.613 real 0m18.446s 00:16:34.613 user 1m4.405s 00:16:34.613 sys 0m5.208s 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:34.613 ************************************ 00:16:34.613 END TEST nvmf_lvol 00:16:34.613 ************************************ 00:16:34.613 00:00:08 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:34.613 00:00:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:34.613 00:00:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:34.613 00:00:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.613 ************************************ 00:16:34.613 START TEST nvmf_lvs_grow 00:16:34.613 ************************************ 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:34.613 * Looking for test storage... 
00:16:34.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.613 00:00:08 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.613 00:00:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.614 00:00:08 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.614 00:00:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:35.987 00:00:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.987 00:00:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:35.987 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:35.987 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.987 00:00:10 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:35.987 Found net devices under 0000:08:00.0: cvl_0_0 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:35.987 Found net devices under 0000:08:00.1: cvl_0_1 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:35.987 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:35.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:16:35.988 00:16:35.988 --- 10.0.0.2 ping statistics --- 00:16:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.988 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:16:35.988 00:16:35.988 --- 10.0.0.1 ping statistics --- 00:16:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.988 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1246186 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1246186 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1246186 ']' 
00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:35.988 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:35.988 [2024-07-16 00:00:10.374441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:35.988 [2024-07-16 00:00:10.374533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.988 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.988 [2024-07-16 00:00:10.440601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.257 [2024-07-16 00:00:10.526891] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.257 [2024-07-16 00:00:10.526947] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.257 [2024-07-16 00:00:10.526962] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.257 [2024-07-16 00:00:10.526975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.257 [2024-07-16 00:00:10.526987] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:36.257 [2024-07-16 00:00:10.527017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.257 00:00:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.525 [2024-07-16 00:00:10.917699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.525 ************************************ 00:16:36.525 START TEST lvs_grow_clean 00:16:36.525 ************************************ 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.525 00:00:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:36.782 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:36.782 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c4672f9-aecc-4967-a605-7d797b590615 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:37.346 00:00:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c4672f9-aecc-4967-a605-7d797b590615 lvol 150 00:16:37.909 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5762ba42-50a9-49b5-bad0-14ee56dfe692 00:16:37.909 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:37.909 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:38.165 [2024-07-16 00:00:12.424890] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:38.165 [2024-07-16 00:00:12.424981] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:38.165 true 00:16:38.165 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:38.165 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:38.421 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:38.421 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
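The sequence above truncates a 200 MiB AIO file, creates an lvstore on it with `--cluster-sz 4194304`, and asserts `data_clusters == 49`; after the file is grown to 400 MiB and `bdev_aio_rescan` runs, the later check expects 99. A sketch of that arithmetic, assuming (inferred from the logged 49/99 values, not from SPDK documentation) that lvstore metadata consumes one cluster:

```shell
# Cluster accounting sketch for the data_clusters checks in this log.
# md_clusters=1 is an assumption derived from the observed numbers.
cluster_mb=4          # --cluster-sz 4194304
md_clusters=1         # assumed lvstore metadata overhead
data_clusters() {
    echo $(( $1 / cluster_mb - md_clusters ))
}
data_clusters 200     # 49, matching the pre-grow check
data_clusters 400     # 99, matching the post-grow check
```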
00:16:38.421 00:00:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5762ba42-50a9-49b5-bad0-14ee56dfe692 00:16:38.679 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:38.935 [2024-07-16 00:00:13.411899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.935 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1246515 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1246515 /var/tmp/bdevperf.sock 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1246515 ']' 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:39.192 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.192 [2024-07-16 00:00:13.704077] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:39.192 [2024-07-16 00:00:13.704187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246515 ] 00:16:39.448 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.448 [2024-07-16 00:00:13.759232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.448 [2024-07-16 00:00:13.846543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.448 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.448 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:16:39.448 00:00:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:40.013 Nvme0n1 00:16:40.013 00:00:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:40.271 [ 00:16:40.271 { 00:16:40.271 "name": "Nvme0n1", 00:16:40.271 "aliases": [ 00:16:40.271 "5762ba42-50a9-49b5-bad0-14ee56dfe692" 00:16:40.271 ], 00:16:40.271 
"product_name": "NVMe disk", 00:16:40.271 "block_size": 4096, 00:16:40.271 "num_blocks": 38912, 00:16:40.271 "uuid": "5762ba42-50a9-49b5-bad0-14ee56dfe692", 00:16:40.271 "assigned_rate_limits": { 00:16:40.271 "rw_ios_per_sec": 0, 00:16:40.271 "rw_mbytes_per_sec": 0, 00:16:40.271 "r_mbytes_per_sec": 0, 00:16:40.271 "w_mbytes_per_sec": 0 00:16:40.271 }, 00:16:40.271 "claimed": false, 00:16:40.271 "zoned": false, 00:16:40.271 "supported_io_types": { 00:16:40.271 "read": true, 00:16:40.271 "write": true, 00:16:40.271 "unmap": true, 00:16:40.271 "write_zeroes": true, 00:16:40.271 "flush": true, 00:16:40.271 "reset": true, 00:16:40.271 "compare": true, 00:16:40.271 "compare_and_write": true, 00:16:40.271 "abort": true, 00:16:40.271 "nvme_admin": true, 00:16:40.271 "nvme_io": true 00:16:40.271 }, 00:16:40.271 "memory_domains": [ 00:16:40.271 { 00:16:40.271 "dma_device_id": "system", 00:16:40.271 "dma_device_type": 1 00:16:40.271 } 00:16:40.271 ], 00:16:40.271 "driver_specific": { 00:16:40.271 "nvme": [ 00:16:40.271 { 00:16:40.271 "trid": { 00:16:40.271 "trtype": "TCP", 00:16:40.271 "adrfam": "IPv4", 00:16:40.271 "traddr": "10.0.0.2", 00:16:40.271 "trsvcid": "4420", 00:16:40.271 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:40.271 }, 00:16:40.271 "ctrlr_data": { 00:16:40.271 "cntlid": 1, 00:16:40.271 "vendor_id": "0x8086", 00:16:40.271 "model_number": "SPDK bdev Controller", 00:16:40.271 "serial_number": "SPDK0", 00:16:40.271 "firmware_revision": "24.05.1", 00:16:40.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:40.271 "oacs": { 00:16:40.271 "security": 0, 00:16:40.271 "format": 0, 00:16:40.271 "firmware": 0, 00:16:40.271 "ns_manage": 0 00:16:40.271 }, 00:16:40.271 "multi_ctrlr": true, 00:16:40.271 "ana_reporting": false 00:16:40.271 }, 00:16:40.271 "vs": { 00:16:40.271 "nvme_version": "1.3" 00:16:40.271 }, 00:16:40.271 "ns_data": { 00:16:40.271 "id": 1, 00:16:40.271 "can_share": true 00:16:40.271 } 00:16:40.271 } 00:16:40.271 ], 00:16:40.271 "mp_policy": 
"active_passive" 00:16:40.271 } 00:16:40.271 } 00:16:40.271 ] 00:16:40.271 00:00:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1246615 00:16:40.271 00:00:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:40.271 00:00:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:40.529 Running I/O for 10 seconds... 00:16:41.462 Latency(us) 00:16:41.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.462 Nvme0n1 : 1.00 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:16:41.462 =================================================================================================================== 00:16:41.462 Total : 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:16:41.462 00:16:42.393 00:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:42.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.393 Nvme0n1 : 2.00 13716.50 53.58 0.00 0.00 0.00 0.00 0.00 00:16:42.393 =================================================================================================================== 00:16:42.393 Total : 13716.50 53.58 0.00 0.00 0.00 0.00 0.00 00:16:42.393 00:16:42.651 true 00:16:42.651 00:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:42.651 00:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:42.908 00:00:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:42.908 00:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:42.909 00:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1246615 00:16:43.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.474 Nvme0n1 : 3.00 13801.00 53.91 0.00 0.00 0.00 0.00 0.00 00:16:43.474 =================================================================================================================== 00:16:43.474 Total : 13801.00 53.91 0.00 0.00 0.00 0.00 0.00 00:16:43.474 00:16:44.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.405 Nvme0n1 : 4.00 13843.25 54.08 0.00 0.00 0.00 0.00 0.00 00:16:44.405 =================================================================================================================== 00:16:44.405 Total : 13843.25 54.08 0.00 0.00 0.00 0.00 0.00 00:16:44.405 00:16:45.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:45.775 Nvme0n1 : 5.00 13881.40 54.22 0.00 0.00 0.00 0.00 0.00 00:16:45.775 =================================================================================================================== 00:16:45.775 Total : 13881.40 54.22 0.00 0.00 0.00 0.00 0.00 00:16:45.775 00:16:46.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.707 Nvme0n1 : 6.00 13906.67 54.32 0.00 0.00 0.00 0.00 0.00 00:16:46.707 =================================================================================================================== 00:16:46.707 Total : 13906.67 54.32 0.00 0.00 0.00 0.00 0.00 00:16:46.707 00:16:47.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.639 Nvme0n1 : 7.00 13933.86 54.43 0.00 0.00 0.00 0.00 0.00 00:16:47.639 
=================================================================================================================== 00:16:47.639 Total : 13933.86 54.43 0.00 0.00 0.00 0.00 0.00 00:16:47.639 00:16:48.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.573 Nvme0n1 : 8.00 13970.12 54.57 0.00 0.00 0.00 0.00 0.00 00:16:48.573 =================================================================================================================== 00:16:48.573 Total : 13970.12 54.57 0.00 0.00 0.00 0.00 0.00 00:16:48.573 00:16:49.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.505 Nvme0n1 : 9.00 13986.11 54.63 0.00 0.00 0.00 0.00 0.00 00:16:49.505 =================================================================================================================== 00:16:49.505 Total : 13986.11 54.63 0.00 0.00 0.00 0.00 0.00 00:16:49.505 00:16:50.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.440 Nvme0n1 : 10.00 14009.90 54.73 0.00 0.00 0.00 0.00 0.00 00:16:50.440 =================================================================================================================== 00:16:50.440 Total : 14009.90 54.73 0.00 0.00 0.00 0.00 0.00 00:16:50.440 00:16:50.440 00:16:50.440 Latency(us) 00:16:50.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.440 Nvme0n1 : 10.01 14010.06 54.73 0.00 0.00 9130.90 3737.98 17087.91 00:16:50.440 =================================================================================================================== 00:16:50.440 Total : 14010.06 54.73 0.00 0.00 9130.90 3737.98 17087.91 00:16:50.440 0 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1246515 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 
1246515 ']' 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1246515 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1246515 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1246515' 00:16:50.440 killing process with pid 1246515 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1246515 00:16:50.440 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.440 00:16:50.440 Latency(us) 00:16:50.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.440 =================================================================================================================== 00:16:50.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.440 00:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1246515 00:16:50.700 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:50.959 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:51.216 00:00:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:51.216 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:51.473 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:51.473 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:51.473 00:00:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:51.730 [2024-07-16 00:00:26.186941] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:51.730 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:52.035 request: 00:16:52.035 { 00:16:52.035 "uuid": "6c4672f9-aecc-4967-a605-7d797b590615", 00:16:52.035 "method": "bdev_lvol_get_lvstores", 00:16:52.035 "req_id": 1 00:16:52.035 } 00:16:52.035 Got JSON-RPC error response 00:16:52.035 response: 00:16:52.035 { 00:16:52.035 "code": -19, 00:16:52.035 "message": "No such device" 00:16:52.035 } 00:16:52.036 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:52.036 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.036 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.036 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.036 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.295 aio_bdev 
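After the AIO bdev is recreated, the test re-examines the 150 MiB lvol (the JSON that follows reports `"num_allocated_clusters": 38`) and then asserts `free_clusters == 61` out of the 99 data clusters. A sketch of that bookkeeping, assuming allocation rounds the lvol size up to whole 4 MiB clusters:

```shell
# free_clusters sketch: values mirror the checks in this log
# (num_allocated_clusters=38, free_clusters=61, total_data_clusters=99).
lvol_mb=150
cluster_mb=4
total_data_clusters=99
allocated=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))  # round up: 38
free_clusters=$(( total_data_clusters - allocated ))      # 61
echo "$allocated $free_clusters"
```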
00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5762ba42-50a9-49b5-bad0-14ee56dfe692 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=5762ba42-50a9-49b5-bad0-14ee56dfe692 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:52.295 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:52.553 00:00:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5762ba42-50a9-49b5-bad0-14ee56dfe692 -t 2000 00:16:52.810 [ 00:16:52.810 { 00:16:52.810 "name": "5762ba42-50a9-49b5-bad0-14ee56dfe692", 00:16:52.810 "aliases": [ 00:16:52.810 "lvs/lvol" 00:16:52.810 ], 00:16:52.810 "product_name": "Logical Volume", 00:16:52.810 "block_size": 4096, 00:16:52.810 "num_blocks": 38912, 00:16:52.810 "uuid": "5762ba42-50a9-49b5-bad0-14ee56dfe692", 00:16:52.810 "assigned_rate_limits": { 00:16:52.810 "rw_ios_per_sec": 0, 00:16:52.810 "rw_mbytes_per_sec": 0, 00:16:52.810 "r_mbytes_per_sec": 0, 00:16:52.810 "w_mbytes_per_sec": 0 00:16:52.810 }, 00:16:52.810 "claimed": false, 00:16:52.810 "zoned": false, 00:16:52.810 "supported_io_types": { 00:16:52.810 "read": true, 00:16:52.810 "write": true, 00:16:52.810 "unmap": true, 00:16:52.810 "write_zeroes": true, 00:16:52.810 "flush": false, 00:16:52.810 "reset": true, 00:16:52.810 "compare": false, 
00:16:52.810 "compare_and_write": false, 00:16:52.810 "abort": false, 00:16:52.810 "nvme_admin": false, 00:16:52.810 "nvme_io": false 00:16:52.810 }, 00:16:52.810 "driver_specific": { 00:16:52.810 "lvol": { 00:16:52.810 "lvol_store_uuid": "6c4672f9-aecc-4967-a605-7d797b590615", 00:16:52.810 "base_bdev": "aio_bdev", 00:16:52.810 "thin_provision": false, 00:16:52.810 "num_allocated_clusters": 38, 00:16:52.810 "snapshot": false, 00:16:52.810 "clone": false, 00:16:52.810 "esnap_clone": false 00:16:52.810 } 00:16:52.810 } 00:16:52.810 } 00:16:52.810 ] 00:16:52.810 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:16:52.810 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:52.810 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:53.066 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:53.066 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c4672f9-aecc-4967-a605-7d797b590615 00:16:53.066 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:53.323 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:53.323 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5762ba42-50a9-49b5-bad0-14ee56dfe692 00:16:53.580 00:00:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
6c4672f9-aecc-4967-a605-7d797b590615 00:16:53.836 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.093 00:16:54.093 real 0m17.509s 00:16:54.093 user 0m17.125s 00:16:54.093 sys 0m1.772s 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:54.093 ************************************ 00:16:54.093 END TEST lvs_grow_clean 00:16:54.093 ************************************ 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:54.093 ************************************ 00:16:54.093 START TEST lvs_grow_dirty 00:16:54.093 ************************************ 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.093 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.094 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.349 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:54.349 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:54.606 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=31607074-a431-4888-9843-7fa681ee1c73 00:16:54.606 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:16:54.606 00:00:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:54.864 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:54.864 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:54.864 
00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 31607074-a431-4888-9843-7fa681ee1c73 lvol 150 00:16:55.121 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:16:55.121 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.121 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:55.379 [2024-07-16 00:00:29.852749] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:55.379 [2024-07-16 00:00:29.852827] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:55.379 true 00:16:55.379 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:55.379 00:00:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:16:55.942 00:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:55.942 00:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:55.942 00:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:16:56.198 00:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.456 [2024-07-16 00:00:30.855781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.456 00:00:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1248093 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1248093 /var/tmp/bdevperf.sock 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1248093 ']' 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:56.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.714 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:56.714 [2024-07-16 00:00:31.163239] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:56.714 [2024-07-16 00:00:31.163342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248093 ] 00:16:56.714 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.714 [2024-07-16 00:00:31.216841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.992 [2024-07-16 00:00:31.305391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.992 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.992 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:16:56.992 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:57.249 Nvme0n1 00:16:57.249 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:57.506 [ 00:16:57.506 { 00:16:57.506 "name": "Nvme0n1", 00:16:57.506 "aliases": [ 00:16:57.506 "ad485c3d-9900-4786-8f4c-eff4bb304fb3" 00:16:57.506 ], 00:16:57.506 "product_name": "NVMe disk", 00:16:57.506 "block_size": 4096, 00:16:57.506 "num_blocks": 
38912, 00:16:57.506 "uuid": "ad485c3d-9900-4786-8f4c-eff4bb304fb3", 00:16:57.506 "assigned_rate_limits": { 00:16:57.506 "rw_ios_per_sec": 0, 00:16:57.506 "rw_mbytes_per_sec": 0, 00:16:57.506 "r_mbytes_per_sec": 0, 00:16:57.506 "w_mbytes_per_sec": 0 00:16:57.506 }, 00:16:57.506 "claimed": false, 00:16:57.506 "zoned": false, 00:16:57.506 "supported_io_types": { 00:16:57.506 "read": true, 00:16:57.506 "write": true, 00:16:57.506 "unmap": true, 00:16:57.506 "write_zeroes": true, 00:16:57.506 "flush": true, 00:16:57.506 "reset": true, 00:16:57.507 "compare": true, 00:16:57.507 "compare_and_write": true, 00:16:57.507 "abort": true, 00:16:57.507 "nvme_admin": true, 00:16:57.507 "nvme_io": true 00:16:57.507 }, 00:16:57.507 "memory_domains": [ 00:16:57.507 { 00:16:57.507 "dma_device_id": "system", 00:16:57.507 "dma_device_type": 1 00:16:57.507 } 00:16:57.507 ], 00:16:57.507 "driver_specific": { 00:16:57.507 "nvme": [ 00:16:57.507 { 00:16:57.507 "trid": { 00:16:57.507 "trtype": "TCP", 00:16:57.507 "adrfam": "IPv4", 00:16:57.507 "traddr": "10.0.0.2", 00:16:57.507 "trsvcid": "4420", 00:16:57.507 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:57.507 }, 00:16:57.507 "ctrlr_data": { 00:16:57.507 "cntlid": 1, 00:16:57.507 "vendor_id": "0x8086", 00:16:57.507 "model_number": "SPDK bdev Controller", 00:16:57.507 "serial_number": "SPDK0", 00:16:57.507 "firmware_revision": "24.05.1", 00:16:57.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:57.507 "oacs": { 00:16:57.507 "security": 0, 00:16:57.507 "format": 0, 00:16:57.507 "firmware": 0, 00:16:57.507 "ns_manage": 0 00:16:57.507 }, 00:16:57.507 "multi_ctrlr": true, 00:16:57.507 "ana_reporting": false 00:16:57.507 }, 00:16:57.507 "vs": { 00:16:57.507 "nvme_version": "1.3" 00:16:57.507 }, 00:16:57.507 "ns_data": { 00:16:57.507 "id": 1, 00:16:57.507 "can_share": true 00:16:57.507 } 00:16:57.507 } 00:16:57.507 ], 00:16:57.507 "mp_policy": "active_passive" 00:16:57.507 } 00:16:57.507 } 00:16:57.507 ] 00:16:57.507 00:00:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1248192 00:16:57.507 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:57.507 00:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.763 Running I/O for 10 seconds... 00:16:58.693 Latency(us) 00:16:58.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.693 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:16:58.693 =================================================================================================================== 00:16:58.693 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:16:58.693 00:16:59.625 00:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 31607074-a431-4888-9843-7fa681ee1c73 00:16:59.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.625 Nvme0n1 : 2.00 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:16:59.625 =================================================================================================================== 00:16:59.625 Total : 13970.50 54.57 0.00 0.00 0.00 0.00 0.00 00:16:59.625 00:16:59.882 true 00:16:59.882 00:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:16:59.882 00:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:00.139 00:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:17:00.139 00:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:00.139 00:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1248192 00:17:00.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.705 Nvme0n1 : 3.00 14034.33 54.82 0.00 0.00 0.00 0.00 0.00 00:17:00.705 =================================================================================================================== 00:17:00.705 Total : 14034.33 54.82 0.00 0.00 0.00 0.00 0.00 00:17:00.705 00:17:01.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.639 Nvme0n1 : 4.00 14116.50 55.14 0.00 0.00 0.00 0.00 0.00 00:17:01.639 =================================================================================================================== 00:17:01.639 Total : 14116.50 55.14 0.00 0.00 0.00 0.00 0.00 00:17:01.639 00:17:02.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.572 Nvme0n1 : 5.00 14163.40 55.33 0.00 0.00 0.00 0.00 0.00 00:17:02.572 =================================================================================================================== 00:17:02.572 Total : 14163.40 55.33 0.00 0.00 0.00 0.00 0.00 00:17:02.572 00:17:03.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.954 Nvme0n1 : 6.00 14194.67 55.45 0.00 0.00 0.00 0.00 0.00 00:17:03.954 =================================================================================================================== 00:17:03.954 Total : 14194.67 55.45 0.00 0.00 0.00 0.00 0.00 00:17:03.954 00:17:04.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.888 Nvme0n1 : 7.00 14235.14 55.61 0.00 0.00 0.00 0.00 0.00 00:17:04.888 =================================================================================================================== 00:17:04.888 Total : 14235.14 55.61 
0.00 0.00 0.00 0.00 0.00 00:17:04.888 00:17:05.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.821 Nvme0n1 : 8.00 14249.62 55.66 0.00 0.00 0.00 0.00 0.00 00:17:05.821 =================================================================================================================== 00:17:05.821 Total : 14249.62 55.66 0.00 0.00 0.00 0.00 0.00 00:17:05.821 00:17:06.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.751 Nvme0n1 : 9.00 14275.00 55.76 0.00 0.00 0.00 0.00 0.00 00:17:06.751 =================================================================================================================== 00:17:06.751 Total : 14275.00 55.76 0.00 0.00 0.00 0.00 0.00 00:17:06.751 00:17:07.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.719 Nvme0n1 : 10.00 14302.40 55.87 0.00 0.00 0.00 0.00 0.00 00:17:07.719 =================================================================================================================== 00:17:07.719 Total : 14302.40 55.87 0.00 0.00 0.00 0.00 0.00 00:17:07.719 00:17:07.719 00:17:07.719 Latency(us) 00:17:07.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.719 Nvme0n1 : 10.01 14306.92 55.89 0.00 0.00 8941.62 5339.97 17476.27 00:17:07.719 =================================================================================================================== 00:17:07.719 Total : 14306.92 55.89 0.00 0.00 8941.62 5339.97 17476.27 00:17:07.719 0 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1248093 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1248093 ']' 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1248093 00:17:07.719 00:00:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1248093 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1248093' 00:17:07.719 killing process with pid 1248093 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1248093 00:17:07.719 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.719 00:17:07.719 Latency(us) 00:17:07.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.719 =================================================================================================================== 00:17:07.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:07.719 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1248093 00:17:07.977 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.235 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.492 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
31607074-a431-4888-9843-7fa681ee1c73 00:17:08.492 00:00:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1246186 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1246186 00:17:08.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1246186 Killed "${NVMF_APP[@]}" "$@" 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1249154 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1249154 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1249154 ']' 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:08.750 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.007 [2024-07-16 00:00:43.291176] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:09.007 [2024-07-16 00:00:43.291284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.007 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.007 [2024-07-16 00:00:43.358468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.007 [2024-07-16 00:00:43.444219] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.007 [2024-07-16 00:00:43.444265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.007 [2024-07-16 00:00:43.444281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.007 [2024-07-16 00:00:43.444295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.007 [2024-07-16 00:00:43.444307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.007 [2024-07-16 00:00:43.444336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.264 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.265 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:09.522 [2024-07-16 00:00:43.845668] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:09.523 [2024-07-16 00:00:43.845811] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:09.523 [2024-07-16 00:00:43.845867] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:09.523 00:00:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:09.523 00:00:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:09.780 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad485c3d-9900-4786-8f4c-eff4bb304fb3 -t 2000 00:17:10.038 [ 00:17:10.038 { 00:17:10.038 "name": "ad485c3d-9900-4786-8f4c-eff4bb304fb3", 00:17:10.038 "aliases": [ 00:17:10.038 "lvs/lvol" 00:17:10.038 ], 00:17:10.038 "product_name": "Logical Volume", 00:17:10.038 "block_size": 4096, 00:17:10.038 "num_blocks": 38912, 00:17:10.038 "uuid": "ad485c3d-9900-4786-8f4c-eff4bb304fb3", 00:17:10.038 "assigned_rate_limits": { 00:17:10.038 "rw_ios_per_sec": 0, 00:17:10.038 "rw_mbytes_per_sec": 0, 00:17:10.038 "r_mbytes_per_sec": 0, 00:17:10.038 "w_mbytes_per_sec": 0 00:17:10.038 }, 00:17:10.038 "claimed": false, 00:17:10.038 "zoned": false, 00:17:10.038 "supported_io_types": { 00:17:10.038 "read": true, 00:17:10.038 "write": true, 00:17:10.038 "unmap": true, 00:17:10.038 "write_zeroes": true, 00:17:10.038 "flush": false, 00:17:10.038 "reset": true, 00:17:10.038 "compare": false, 00:17:10.038 "compare_and_write": false, 00:17:10.038 "abort": false, 00:17:10.038 "nvme_admin": false, 00:17:10.038 "nvme_io": false 00:17:10.038 }, 00:17:10.038 "driver_specific": { 00:17:10.038 "lvol": { 00:17:10.038 "lvol_store_uuid": "31607074-a431-4888-9843-7fa681ee1c73", 00:17:10.038 "base_bdev": "aio_bdev", 00:17:10.038 "thin_provision": false, 00:17:10.038 "num_allocated_clusters": 38, 00:17:10.038 "snapshot": false, 00:17:10.038 
"clone": false, 00:17:10.038 "esnap_clone": false 00:17:10.038 } 00:17:10.038 } 00:17:10.038 } 00:17:10.038 ] 00:17:10.038 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:10.038 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:10.038 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:10.296 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:10.296 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:10.296 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:10.553 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:10.553 00:00:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.811 [2024-07-16 00:00:45.114689] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:10.811 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:11.068 request: 00:17:11.068 { 00:17:11.068 "uuid": "31607074-a431-4888-9843-7fa681ee1c73", 00:17:11.068 "method": "bdev_lvol_get_lvstores", 00:17:11.068 "req_id": 1 00:17:11.068 } 00:17:11.068 Got JSON-RPC error response 00:17:11.068 response: 00:17:11.068 { 00:17:11.068 "code": -19, 00:17:11.068 "message": "No such device" 00:17:11.068 } 00:17:11.068 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:11.068 00:00:45 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.068 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.068 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.068 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:11.325 aio_bdev 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:11.325 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:11.582 00:00:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ad485c3d-9900-4786-8f4c-eff4bb304fb3 -t 2000 00:17:11.840 [ 00:17:11.840 { 00:17:11.840 "name": "ad485c3d-9900-4786-8f4c-eff4bb304fb3", 00:17:11.840 "aliases": [ 00:17:11.840 "lvs/lvol" 00:17:11.840 ], 00:17:11.840 "product_name": "Logical Volume", 00:17:11.840 "block_size": 4096, 
00:17:11.840 "num_blocks": 38912, 00:17:11.841 "uuid": "ad485c3d-9900-4786-8f4c-eff4bb304fb3", 00:17:11.841 "assigned_rate_limits": { 00:17:11.841 "rw_ios_per_sec": 0, 00:17:11.841 "rw_mbytes_per_sec": 0, 00:17:11.841 "r_mbytes_per_sec": 0, 00:17:11.841 "w_mbytes_per_sec": 0 00:17:11.841 }, 00:17:11.841 "claimed": false, 00:17:11.841 "zoned": false, 00:17:11.841 "supported_io_types": { 00:17:11.841 "read": true, 00:17:11.841 "write": true, 00:17:11.841 "unmap": true, 00:17:11.841 "write_zeroes": true, 00:17:11.841 "flush": false, 00:17:11.841 "reset": true, 00:17:11.841 "compare": false, 00:17:11.841 "compare_and_write": false, 00:17:11.841 "abort": false, 00:17:11.841 "nvme_admin": false, 00:17:11.841 "nvme_io": false 00:17:11.841 }, 00:17:11.841 "driver_specific": { 00:17:11.841 "lvol": { 00:17:11.841 "lvol_store_uuid": "31607074-a431-4888-9843-7fa681ee1c73", 00:17:11.841 "base_bdev": "aio_bdev", 00:17:11.841 "thin_provision": false, 00:17:11.841 "num_allocated_clusters": 38, 00:17:11.841 "snapshot": false, 00:17:11.841 "clone": false, 00:17:11.841 "esnap_clone": false 00:17:11.841 } 00:17:11.841 } 00:17:11.841 } 00:17:11.841 ] 00:17:11.841 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:11.841 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:11.841 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:12.099 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:12.099 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:12.099 00:00:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:12.357 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:12.357 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ad485c3d-9900-4786-8f4c-eff4bb304fb3 00:17:12.614 00:00:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 31607074-a431-4888-9843-7fa681ee1c73 00:17:12.872 00:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.140 00:17:13.140 real 0m19.098s 00:17:13.140 user 0m48.677s 00:17:13.140 sys 0m4.272s 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:13.140 ************************************ 00:17:13.140 END TEST lvs_grow_dirty 00:17:13.140 ************************************ 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:13.140 
00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:13.140 nvmf_trace.0 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.140 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.140 rmmod nvme_tcp 00:17:13.398 rmmod nvme_fabrics 00:17:13.398 rmmod nvme_keyring 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1249154 ']' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1249154 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1249154 ']' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1249154 00:17:13.398 00:00:47 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1249154 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1249154' 00:17:13.398 killing process with pid 1249154 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1249154 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1249154 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.398 00:00:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.984 00:00:49 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.984 00:17:15.984 real 0m41.339s 00:17:15.984 user 1m11.378s 00:17:15.984 sys 0m7.593s 00:17:15.984 00:00:49 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:15.984 00:00:49 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:15.984 ************************************ 00:17:15.984 END TEST nvmf_lvs_grow 00:17:15.984 ************************************ 00:17:15.984 00:00:49 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:15.984 00:00:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:15.984 00:00:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:15.984 00:00:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.984 ************************************ 00:17:15.984 START TEST nvmf_bdev_io_wait 00:17:15.984 ************************************ 00:17:15.984 00:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:15.984 * Looking for test storage... 
00:17:15.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.984 00:00:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.364 00:00:51 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:17.364 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:17.364 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:17.364 00:00:51 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:17.364 Found net devices under 0000:08:00.0: cvl_0_0 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:17.364 Found net devices under 0000:08:00.1: cvl_0_1 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:17.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:17:17.364 00:17:17.364 --- 10.0.0.2 ping statistics --- 00:17:17.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.364 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:17:17.364 00:17:17.364 --- 10.0.0.1 ping statistics --- 00:17:17.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.364 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1251072 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1251072 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1251072 ']' 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.364 00:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.365 [2024-07-16 00:00:51.813554] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:17.365 [2024-07-16 00:00:51.813658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.365 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.622 [2024-07-16 00:00:51.879983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.622 [2024-07-16 00:00:51.972455] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:17.622 [2024-07-16 00:00:51.972516] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.622 [2024-07-16 00:00:51.972532] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.622 [2024-07-16 00:00:51.972547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.622 [2024-07-16 00:00:51.972558] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.622 [2024-07-16 00:00:51.972635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.622 [2024-07-16 00:00:51.972957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.622 [2024-07-16 00:00:51.974159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.622 [2024-07-16 00:00:51.974175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.622 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 [2024-07-16 00:00:52.165679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 Malloc0 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 [2024-07-16 00:00:52.231597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1251144 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1251147 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.880 { 00:17:17.880 "params": { 00:17:17.880 "name": "Nvme$subsystem", 00:17:17.880 "trtype": "$TEST_TRANSPORT", 
00:17:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.880 "adrfam": "ipv4", 00:17:17.880 "trsvcid": "$NVMF_PORT", 00:17:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.880 "hdgst": ${hdgst:-false}, 00:17:17.880 "ddgst": ${ddgst:-false} 00:17:17.880 }, 00:17:17.880 "method": "bdev_nvme_attach_controller" 00:17:17.880 } 00:17:17.880 EOF 00:17:17.880 )") 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1251149 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.880 { 00:17:17.880 "params": { 00:17:17.880 "name": "Nvme$subsystem", 00:17:17.880 "trtype": "$TEST_TRANSPORT", 00:17:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.880 "adrfam": "ipv4", 00:17:17.880 "trsvcid": "$NVMF_PORT", 00:17:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.880 "hdgst": ${hdgst:-false}, 00:17:17.880 "ddgst": ${ddgst:-false} 00:17:17.880 }, 00:17:17.880 "method": "bdev_nvme_attach_controller" 00:17:17.880 } 00:17:17.880 EOF 00:17:17.880 )") 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1251152 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.880 { 00:17:17.880 "params": { 00:17:17.880 "name": "Nvme$subsystem", 00:17:17.880 "trtype": "$TEST_TRANSPORT", 00:17:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.880 "adrfam": "ipv4", 00:17:17.880 "trsvcid": "$NVMF_PORT", 00:17:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.880 "hdgst": ${hdgst:-false}, 00:17:17.880 "ddgst": ${ddgst:-false} 00:17:17.880 }, 00:17:17.880 "method": "bdev_nvme_attach_controller" 00:17:17.880 } 00:17:17.880 EOF 00:17:17.880 )") 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.880 { 00:17:17.880 "params": { 00:17:17.880 "name": "Nvme$subsystem", 00:17:17.880 "trtype": "$TEST_TRANSPORT", 00:17:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.880 "adrfam": "ipv4", 00:17:17.880 "trsvcid": "$NVMF_PORT", 00:17:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.880 "hdgst": ${hdgst:-false}, 00:17:17.880 "ddgst": ${ddgst:-false} 00:17:17.880 }, 00:17:17.880 "method": "bdev_nvme_attach_controller" 00:17:17.880 } 00:17:17.880 EOF 00:17:17.880 )") 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1251144 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:17.880 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.880 "params": { 00:17:17.880 "name": "Nvme1", 00:17:17.880 "trtype": "tcp", 00:17:17.880 "traddr": "10.0.0.2", 00:17:17.880 "adrfam": "ipv4", 00:17:17.880 "trsvcid": "4420", 00:17:17.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.881 "hdgst": false, 00:17:17.881 "ddgst": false 00:17:17.881 }, 00:17:17.881 "method": "bdev_nvme_attach_controller" 00:17:17.881 }' 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.881 "params": { 00:17:17.881 "name": "Nvme1", 00:17:17.881 "trtype": "tcp", 00:17:17.881 "traddr": "10.0.0.2", 00:17:17.881 "adrfam": "ipv4", 00:17:17.881 "trsvcid": "4420", 00:17:17.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.881 "hdgst": false, 00:17:17.881 "ddgst": false 00:17:17.881 }, 00:17:17.881 "method": "bdev_nvme_attach_controller" 00:17:17.881 }' 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.881 "params": { 00:17:17.881 "name": "Nvme1", 00:17:17.881 "trtype": "tcp", 00:17:17.881 "traddr": "10.0.0.2", 00:17:17.881 "adrfam": "ipv4", 00:17:17.881 "trsvcid": "4420", 00:17:17.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.881 "hdgst": false, 00:17:17.881 "ddgst": false 00:17:17.881 }, 00:17:17.881 "method": "bdev_nvme_attach_controller" 00:17:17.881 }' 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:17.881 00:00:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.881 "params": { 00:17:17.881 "name": "Nvme1", 00:17:17.881 "trtype": "tcp", 00:17:17.881 "traddr": "10.0.0.2", 00:17:17.881 "adrfam": "ipv4", 00:17:17.881 "trsvcid": "4420", 00:17:17.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.881 "hdgst": false, 00:17:17.881 "ddgst": false 00:17:17.881 }, 00:17:17.881 "method": "bdev_nvme_attach_controller" 00:17:17.881 }' 00:17:17.881 [2024-07-16 00:00:52.280750] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:17.881 [2024-07-16 00:00:52.280751] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:17.881 [2024-07-16 00:00:52.280750] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:17.881 [2024-07-16 00:00:52.280852] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 00:00:52.280853] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-16 00:00:52.280853] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:17.881 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:17.881 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:17.881 [2024-07-16 00:00:52.282442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:17.881 [2024-07-16 00:00:52.282525] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:17.881 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.138 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.138 [2024-07-16 00:00:52.421989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.138 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.138 [2024-07-16 00:00:52.489240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.138 [2024-07-16 00:00:52.491572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.138 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.138 [2024-07-16 00:00:52.558949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:18.138 [2024-07-16 00:00:52.575045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.138 [2024-07-16 00:00:52.633755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.138 [2024-07-16 00:00:52.647934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.396 [2024-07-16 00:00:52.701041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.396 Running I/O for 1 seconds... 00:17:18.396 Running I/O for 1 seconds... 00:17:18.655 Running I/O for 1 seconds... 00:17:18.655 Running I/O for 1 seconds... 
00:17:19.596 00:17:19.596 Latency(us) 00:17:19.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.596 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:19.596 Nvme1n1 : 1.01 9440.62 36.88 0.00 0.00 13498.50 7864.32 21456.97 00:17:19.596 =================================================================================================================== 00:17:19.596 Total : 9440.62 36.88 0.00 0.00 13498.50 7864.32 21456.97 00:17:19.596 00:17:19.596 Latency(us) 00:17:19.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.596 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:19.596 Nvme1n1 : 1.01 8722.07 34.07 0.00 0.00 14611.05 7475.96 24758.04 00:17:19.596 =================================================================================================================== 00:17:19.596 Total : 8722.07 34.07 0.00 0.00 14611.05 7475.96 24758.04 00:17:19.596 00:00:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1251147 00:17:19.596 00:17:19.596 Latency(us) 00:17:19.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.596 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:19.596 Nvme1n1 : 1.01 7320.77 28.60 0.00 0.00 17396.44 8883.77 28544.57 00:17:19.596 =================================================================================================================== 00:17:19.596 Total : 7320.77 28.60 0.00 0.00 17396.44 8883.77 28544.57 00:17:19.596 00:17:19.596 Latency(us) 00:17:19.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.596 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:19.596 Nvme1n1 : 1.00 148077.51 578.43 0.00 0.00 860.92 314.03 1080.13 00:17:19.596 =================================================================================================================== 00:17:19.596 Total : 
148077.51 578.43 0.00 0.00 860.92 314.03 1080.13 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1251149 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1251152 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.855 rmmod nvme_tcp 00:17:19.855 rmmod nvme_fabrics 00:17:19.855 rmmod nvme_keyring 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1251072 ']' 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1251072 00:17:19.855 00:00:54 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1251072 ']' 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1251072 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1251072 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1251072' 00:17:19.855 killing process with pid 1251072 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1251072 00:17:19.855 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1251072 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.124 00:00:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.073 00:00:56 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.073 00:17:22.073 real 0m6.517s 00:17:22.073 user 0m15.206s 00:17:22.073 sys 0m3.182s 00:17:22.073 00:00:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.073 00:00:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:22.073 ************************************ 00:17:22.073 END TEST nvmf_bdev_io_wait 00:17:22.073 ************************************ 00:17:22.073 00:00:56 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:22.073 00:00:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:22.073 00:00:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.073 00:00:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.073 ************************************ 00:17:22.073 START TEST nvmf_queue_depth 00:17:22.073 ************************************ 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:22.073 * Looking for test storage... 
00:17:22.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.073 00:00:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:23.971 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:23.971 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:23.971 Found net devices under 0000:08:00.0: cvl_0_0 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.971 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:23.972 Found net devices under 0000:08:00.1: cvl_0_1 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:23.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:23.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:17:23.972 
00:17:23.972 --- 10.0.0.2 ping statistics ---
00:17:23.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:23.972 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:23.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:23.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:17:23.972 
00:17:23.972 --- 10.0.0.1 ping statistics ---
00:17:23.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:23.972 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1252782
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1252782
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1252782 ']'
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:23.972 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:23.972 [2024-07-16 00:00:58.405955] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:23.972 [2024-07-16 00:00:58.406044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:23.972 EAL: No free 2048 kB hugepages reported on node 1
00:17:23.972 [2024-07-16 00:00:58.469479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:24.229 [2024-07-16 00:00:58.555773] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:24.229 [2024-07-16 00:00:58.555830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:24.229 [2024-07-16 00:00:58.555846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:24.229 [2024-07-16 00:00:58.555859] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:24.229 [2024-07-16 00:00:58.555872] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:24.229 [2024-07-16 00:00:58.555899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 [2024-07-16 00:00:58.681921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 Malloc0
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.229 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.229 [2024-07-16 00:00:58.739580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1252805
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1252805 /var/tmp/bdevperf.sock
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1252805 ']'
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:24.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable
00:17:24.486 00:00:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.486 [2024-07-16 00:00:58.788033] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:24.486 [2024-07-16 00:00:58.788130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252805 ]
00:17:24.486 EAL: No free 2048 kB hugepages reported on node 1
00:17:24.486 [2024-07-16 00:00:58.848641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:24.486 [2024-07-16 00:00:58.935984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:24.744 NVMe0n1
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:24.744 00:00:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:25.001 Running I/O for 10 seconds...
00:17:34.964 
00:17:34.964 Latency(us)
00:17:34.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.964 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:17:34.964 Verification LBA range: start 0x0 length 0x4000
00:17:34.964 NVMe0n1 : 10.09 7870.89 30.75 0.00 0.00 129366.74 28350.39 84274.44
00:17:34.964 ===================================================================================================================
00:17:34.964 Total : 7870.89 30.75 0.00 0.00 129366.74 28350.39 84274.44
00:17:34.964 0
00:17:34.964 00:01:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1252805
00:17:34.964 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1252805 ']'
00:17:34.964 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1252805
00:17:34.964 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1252805
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1252805'
00:17:34.965 killing process with pid 1252805
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1252805
00:17:34.965 Received shutdown signal, test time was about 10.000000 seconds
00:17:34.965 
00:17:34.965 Latency(us)
00:17:34.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.965 ===================================================================================================================
00:17:34.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:34.965 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1252805
00:17:35.223 00:01:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:35.223 00:01:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:17:35.223 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:35.223 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:35.224 rmmod nvme_tcp
00:17:35.224 rmmod nvme_fabrics
00:17:35.224 rmmod nvme_keyring
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1252782 ']'
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1252782
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1252782 ']'
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1252782
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1252782
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1252782'
00:17:35.224 killing process with pid 1252782
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1252782
00:17:35.224 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1252782
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:35.498 00:01:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:37.398 00:01:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:37.398 
00:17:37.398 real 0m15.400s
00:17:37.398 user 0m21.326s
00:17:37.398 sys 0m3.046s
00:17:37.398 00:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable
00:17:37.398 00:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:17:37.398 ************************************
00:17:37.398 END TEST nvmf_queue_depth
00:17:37.398 ************************************
00:17:37.655 00:01:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:17:37.655 00:01:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:17:37.655 00:01:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:17:37.655 00:01:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:37.655 ************************************
00:17:37.655 START TEST nvmf_target_multipath
00:17:37.655 ************************************
00:17:37.655 00:01:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:17:37.655 * Looking for test storage...
00:17:37.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:37.656 00:01:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable
00:17:37.656 00:01:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=()
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:17:39.558 Found 0000:08:00.0 (0x8086 - 0x159b)
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:17:39.558 Found 0000:08:00.1 (0x8086 - 0x159b)
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:17:39.558 Found net devices under 0000:08:00.0: cvl_0_0
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:17:39.558 Found net devices under 0000:08:00.1: cvl_0_1
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:39.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:39.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:17:39.558 00:17:39.558 --- 10.0.0.2 ping statistics --- 00:17:39.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.558 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:39.558 00:17:39.558 --- 10.0.0.1 ping statistics --- 00:17:39.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.558 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:39.558 only one NIC for nvmf test 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.558 rmmod nvme_tcp 00:17:39.558 rmmod nvme_fabrics 00:17:39.558 rmmod nvme_keyring 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.558 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.559 00:01:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.466 00:17:41.466 real 0m3.943s 00:17:41.466 user 0m0.680s 00:17:41.466 sys 0m1.251s 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:41.466 00:01:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.466 ************************************ 00:17:41.466 END TEST nvmf_target_multipath 00:17:41.466 ************************************ 00:17:41.466 00:01:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:41.466 00:01:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:41.466 00:01:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:41.466 00:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.466 ************************************ 00:17:41.466 START TEST nvmf_zcopy 00:17:41.466 ************************************ 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:41.466 * Looking for test storage... 
00:17:41.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.466 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.467 00:01:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.726 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.726 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.726 00:01:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.726 00:01:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.102 00:01:17 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.102 00:01:17 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:43.102 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:43.102 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:43.102 Found net devices under 0000:08:00.0: cvl_0_0 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:43.102 Found net devices under 0000:08:00.1: cvl_0_1 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.102 00:01:17 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:43.102 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.103 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:17:43.360 00:17:43.360 --- 10.0.0.2 ping statistics --- 00:17:43.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.360 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:43.360 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:17:43.360 00:17:43.360 --- 10.0.0.1 ping statistics --- 00:17:43.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.361 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1256691 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1256691 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1256691 ']' 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.361 00:01:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.361 [2024-07-16 00:01:17.781724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:43.361 [2024-07-16 00:01:17.781814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.361 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.361 [2024-07-16 00:01:17.846287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.619 [2024-07-16 00:01:17.933119] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.619 [2024-07-16 00:01:17.933194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.619 [2024-07-16 00:01:17.933211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.619 [2024-07-16 00:01:17.933225] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.619 [2024-07-16 00:01:17.933237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.619 [2024-07-16 00:01:17.933274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.619 [2024-07-16 00:01:18.064824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.619 [2024-07-16 00:01:18.080962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.619 malloc0
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:43.619 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:43.619 {
00:17:43.619 "params": {
00:17:43.619 "name": "Nvme$subsystem",
00:17:43.619 "trtype": "$TEST_TRANSPORT",
00:17:43.619 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:43.619 "adrfam": "ipv4",
00:17:43.619 "trsvcid": "$NVMF_PORT",
00:17:43.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:43.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:43.620 "hdgst": ${hdgst:-false},
00:17:43.620 "ddgst": ${ddgst:-false}
00:17:43.620 },
00:17:43.620 "method": "bdev_nvme_attach_controller"
00:17:43.620 }
00:17:43.620 EOF
00:17:43.620 )")
00:17:43.620 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:43.620 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:43.620 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:17:43.620 00:01:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:43.620 "params": {
00:17:43.620 "name": "Nvme1",
00:17:43.620 "trtype": "tcp",
00:17:43.620 "traddr": "10.0.0.2",
00:17:43.620 "adrfam": "ipv4",
00:17:43.620 "trsvcid": "4420",
00:17:43.620 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:43.620 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:43.620 "hdgst": false,
00:17:43.620 "ddgst": false
00:17:43.620 },
00:17:43.620 "method": "bdev_nvme_attach_controller"
00:17:43.620 }'
00:17:43.878 [2024-07-16 00:01:18.161830] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
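The xtrace above shows how gen_nvmf_target_json assembles the JSON config that bdevperf reads from /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem is built in a heredoc, and the entries are comma-joined via IFS. A minimal bash sketch of that mechanism, reconstructed from the trace rather than copied from nvmf/common.sh (the outer "subsystems"/"bdev" wrapper shape is an assumption, and the real helper additionally pretty-prints through `jq .`):

```shell
#!/usr/bin/env bash
# Sketch only: reconstructed from the xtrace output above, not the actual
# nvmf/common.sh source. Environment values mirror the resolved trace.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
	local subsystem config=()
	# "${@:-1}" defaults to subsystem 1 when no arguments are given,
	# matching the `for subsystem in "${@:-1}"` step in the trace.
	for subsystem in "${@:-1}"; do
		# One attach-controller params block per subsystem, as in the heredoc.
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Comma-join the entries, as the IFS=, / printf steps in the trace do.
	local IFS=,
	printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the test itself the result is handed to bdevperf as a process-substitution file descriptor, e.g. `bdevperf --json <(gen_nvmf_target_json)`, which is why the log shows `--json /dev/fd/62`.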
00:17:43.878 [2024-07-16 00:01:18.161925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256718 ]
00:17:43.878 EAL: No free 2048 kB hugepages reported on node 1
00:17:43.878 [2024-07-16 00:01:18.222407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:43.878 [2024-07-16 00:01:18.313108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:44.135 Running I/O for 10 seconds...
00:17:54.179
00:17:54.179 Latency(us)
00:17:54.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:54.179 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:54.179 Verification LBA range: start 0x0 length 0x1000
00:17:54.179 Nvme1n1 : 10.06 5325.82 41.61 0.00 0.00 23862.29 4466.16 47380.10
00:17:54.179 ===================================================================================================================
00:17:54.179 Total : 5325.82 41.61 0.00 0.00 23862.29 4466.16 47380.10
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1257595
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:54.437 {
00:17:54.437 "params": {
00:17:54.437 "name": "Nvme$subsystem",
00:17:54.437 "trtype": "$TEST_TRANSPORT",
00:17:54.437 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:54.437 "adrfam": "ipv4",
00:17:54.437 "trsvcid": "$NVMF_PORT",
00:17:54.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:54.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:54.437 "hdgst": ${hdgst:-false},
00:17:54.437 "ddgst": ${ddgst:-false}
00:17:54.437 },
00:17:54.437 "method": "bdev_nvme_attach_controller"
00:17:54.437 }
00:17:54.437 EOF
00:17:54.437 )")
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:54.437 [2024-07-16 00:01:28.735897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.735938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:17:54.437 00:01:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:54.437 "params": {
00:17:54.437 "name": "Nvme1",
00:17:54.437 "trtype": "tcp",
00:17:54.437 "traddr": "10.0.0.2",
00:17:54.437 "adrfam": "ipv4",
00:17:54.437 "trsvcid": "4420",
00:17:54.437 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:54.437 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:54.437 "hdgst": false,
00:17:54.437 "ddgst": false
00:17:54.437 },
00:17:54.437 "method": "bdev_nvme_attach_controller"
00:17:54.437 }'
00:17:54.437 [2024-07-16 00:01:28.743859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.743885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.751879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.751902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.759900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.759924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.767922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.767945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.775944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.775968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.779720] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:17:54.437 [2024-07-16 00:01:28.779811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257595 ]
00:17:54.437 [2024-07-16 00:01:28.783966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.783990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.791990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.792012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.800014] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.800037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.808036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.808058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 EAL: No free 2048 kB hugepages reported on node 1
00:17:54.437 [2024-07-16 00:01:28.816058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.816081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.824079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.824102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.832117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.832147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.840027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:54.437 [2024-07-16 00:01:28.840123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.840151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.848227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.848272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.856238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.856280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.864205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.864230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.872241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.872270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.880259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.880286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.888322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.888364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.896351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.896394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.904318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.904344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.912354] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.912383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.920377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.920425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.928392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.928418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.930468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:54.437 [2024-07-16 00:01:28.936403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.936426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.437 [2024-07-16 00:01:28.944476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.437 [2024-07-16 00:01:28.944517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.952528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.952577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.960542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.960593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.968554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.968598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.976575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.976620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.984590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.984634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:28.992598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:28.992635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.000647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.000692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.008667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.008711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.016617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.016640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.024659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.024689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.032693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.032721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.040701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.040727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.048732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.048758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.056750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.056777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.064779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.064806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.072804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.072830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.080834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.080858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.088860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.088889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.096860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.096884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
Running I/O for 5 seconds...
00:17:54.695 [2024-07-16 00:01:29.108346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.108375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.119204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.119235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.132567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.132597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.695 [2024-07-16 00:01:29.144371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.695 [2024-07-16 00:01:29.144401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.696 [2024-07-16 00:01:29.156550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.696 [2024-07-16 00:01:29.156579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.696 [2024-07-16 00:01:29.168835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.696 [2024-07-16 00:01:29.168864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.696 [2024-07-16 00:01:29.181486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.696 [2024-07-16 00:01:29.181515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.696 [2024-07-16 00:01:29.193813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.696 [2024-07-16 00:01:29.193842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.696 [2024-07-16 00:01:29.205864] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.205893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.218125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.218165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.230145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.230174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.242268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.242296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.254060] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.254089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.266176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.266204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.277904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.277949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.290011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.290040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.302026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.302054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.314256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.314287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.326479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.326508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.338175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.338204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.350229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.350258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.364287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.364316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.953 [2024-07-16 00:01:29.375753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.953 [2024-07-16 00:01:29.375782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.387933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.387961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.399640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.399669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.411687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.411715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.423494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.423523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.435971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.436000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.448594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.448622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:54.954 [2024-07-16 00:01:29.460976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:54.954 [2024-07-16 00:01:29.461004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.474024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.474054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.486124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.486160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.498483] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.498512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.511066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.511103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.523431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.523460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.536054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.536083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.548318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.548346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.560760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.560788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.573388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.573417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.585627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.585656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.597765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.597794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.610269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.610297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.622686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.622714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.634732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.634760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.646887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.646915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.659064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.659093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.673277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.673305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.684713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.684742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.696556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.696585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.708597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.708626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.212 [2024-07-16 00:01:29.720445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.212 [2024-07-16 00:01:29.720475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.733068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.733099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.745558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.745596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.757799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.757828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.769688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.769717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.781753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.781782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.793959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.793988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.806133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.806171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.819883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.819913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.831115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.831153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.843402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.843431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.855321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.855350] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.867349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.867378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.879213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.879242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.891229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.891267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.903175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.903204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.915510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.915538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.927897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.927925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.940294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.940338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.471 [2024-07-16 00:01:29.952706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.471 [2024-07-16 00:01:29.952735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.472 [2024-07-16 00:01:29.964712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.472 [2024-07-16 00:01:29.964741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.472 [2024-07-16 00:01:29.976983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.472 [2024-07-16 00:01:29.977020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:29.989713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:29.989742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.001794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.001826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.013944] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.013978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.026508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.026543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.038701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.038736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.051569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.051604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.063848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.063878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.076291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.076321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.088884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.088913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:55.729 [2024-07-16 00:01:30.100853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:55.729 [2024-07-16 00:01:30.100883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:17:55.729 [2024-07-16 00:01:30.113146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.729 [2024-07-16 00:01:30.113175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.729 [2024-07-16 00:01:30.125892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.125923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.138734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.138763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.150941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.150971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.163118] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.163165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.175346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.175375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.187318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.187346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.199581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.199609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.211637] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.211666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.223889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.223917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.730 [2024-07-16 00:01:30.235971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.730 [2024-07-16 00:01:30.236002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.987 [2024-07-16 00:01:30.247895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.247924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.259934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.259962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.274181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.274210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.285556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.285584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.297348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.297377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.309188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.309217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.321263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.321291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.333238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.333267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.345160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.345189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.357403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.357432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.369520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.369548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.381865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.381893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.394072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.394101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.406191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 
[2024-07-16 00:01:30.406219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.418720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.418748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.430781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.430809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.443108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.443136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.454937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.454966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.466973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.467002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.479012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.479040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.988 [2024-07-16 00:01:30.491029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.988 [2024-07-16 00:01:30.491057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.503402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.503430] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.515730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.515758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.528072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.528100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.540442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.540471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.552523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.552551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.564420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.564448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.246 [2024-07-16 00:01:30.576090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.246 [2024-07-16 00:01:30.576118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.588267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.588297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.600148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.600177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:56.247 [2024-07-16 00:01:30.612379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.612407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.624300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.624328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.636351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.636379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.648207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.648235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.660451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.660479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.672583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.672611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.684583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.684612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.696855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.696883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.708553] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.708581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.720954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.720982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.732988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.733017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.745289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.745318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.247 [2024-07-16 00:01:30.757508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.247 [2024-07-16 00:01:30.757537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.504 [2024-07-16 00:01:30.769859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.769888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.782001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.782029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.794461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.794490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.806777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.806805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.819303] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.819332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.831652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.831680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.843618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.843646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.856135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.856172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.868338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.868366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.880615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.880643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.893135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.893179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.905646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 
[2024-07-16 00:01:30.905676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.917767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.917796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.929919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.929948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.943894] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.943923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.955734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.955763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.967951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.967980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.980522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.980551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:30.993172] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:30.993201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:31.005623] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:31.005651] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.505 [2024-07-16 00:01:31.018102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.505 [2024-07-16 00:01:31.018130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.030304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.030333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.042319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.042348] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.054412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.054441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.066475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.066503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.078693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.078721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.090756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.090784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.104756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.104784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:56.763 [2024-07-16 00:01:31.116460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.116488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.128635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.128673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.140490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.140518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.152283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.152312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.164392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.164422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.176502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.176531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.189133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.189174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.201088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.201117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.213480] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.213515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.225144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.225180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.236918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.236946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.248941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.248969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.260946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.260977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.763 [2024-07-16 00:01:31.272843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.763 [2024-07-16 00:01:31.272872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.284527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.284556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.296384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.296412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.308475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.308503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.319445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.319474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.331124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.331161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.342811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.342840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.354508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.354547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.366370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.366398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.378322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.378354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.389942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.389970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.401826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 
[2024-07-16 00:01:31.401854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.413486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.413514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.425383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.425412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.437577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.437605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.449956] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.449984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.461838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.461866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.473680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.473709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.485732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.485761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.497511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.497539] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.022 [2024-07-16 00:01:31.509144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.022 [2024-07-16 00:01:31.509172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 12 ms intervals from [2024-07-16 00:01:31.522612] through [2024-07-16 00:01:33.496460] (build clock 00:17:57.022 to 00:17:59.095); identical repeats elided ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.510887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.510916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.522620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.522649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.536551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.536580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.548241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.548270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.559872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.559901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.571841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.571870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.584058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.584086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.095 [2024-07-16 00:01:33.595656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.595683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:59.095 [2024-07-16 00:01:33.607737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.095 [2024-07-16 00:01:33.607773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.352 [2024-07-16 00:01:33.619964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.352 [2024-07-16 00:01:33.619993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.631570] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.631598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.643372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.643401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.655502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.655531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.667576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.667605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.681368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.681396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.692930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.692959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.704900] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.704929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.716872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.716901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.728945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.728973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.741117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.741153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.753255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.753284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.765416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.765444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.777578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.777606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.789425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.789453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.801620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.801648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.813694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.813722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.825699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.825727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.837498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.837526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.849786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.849814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.353 [2024-07-16 00:01:33.861814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.353 [2024-07-16 00:01:33.861842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.874315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.874344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.886663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.886691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.898917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 
[2024-07-16 00:01:33.898945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.911022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.911051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.923204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.923232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.935700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.935729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.947897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.947926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.959847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.959875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.971737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.971766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.983816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.983844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:33.995961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:33.995991] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.007842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.007872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.020074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.020103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.032675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.032704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.048807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.048843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.060840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.060869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.073043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.073071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.085000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.085029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.097224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.097253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:17:59.611 [2024-07-16 00:01:34.109510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.109538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 [2024-07-16 00:01:34.120219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.611 [2024-07-16 00:01:34.120252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.611 00:17:59.611 Latency(us) 00:17:59.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.611 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:59.611 Nvme1n1 : 5.01 10481.51 81.89 0.00 0.00 12195.16 5315.70 19806.44 00:17:59.611 =================================================================================================================== 00:17:59.611 Total : 10481.51 81.89 0.00 0.00 12195.16 5315.70 19806.44 00:17:59.870 [2024-07-16 00:01:34.125403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.125430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.133423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.133452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.141515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.141571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.149538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.149588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 
[2024-07-16 00:01:34.157560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.157610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.165584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.165638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.173596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.173648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.181631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.181683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.189648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.189702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.197669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.197721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.205685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.205744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.213699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.213743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.221721] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.221763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.229760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.229808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.237787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.237847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.245794] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.245837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.253826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.253877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.261854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.261894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.269803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.269827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.277831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.277854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 [2024-07-16 00:01:34.285847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:59.870 [2024-07-16 00:01:34.285870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1257595) - No such process 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1257595 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 delay0 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.870 00:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:59.870 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.870 [2024-07-16 00:01:34.366838] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:06.422 Initializing NVMe Controllers 00:18:06.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.422 Initialization complete. Launching workers. 00:18:06.422 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 57 00:18:06.422 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 344, failed to submit 33 00:18:06.422 success 154, unsuccess 190, failed 0 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.422 rmmod nvme_tcp 00:18:06.422 rmmod nvme_fabrics 00:18:06.422 rmmod nvme_keyring 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1256691 ']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1256691 
']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256691' 00:18:06.422 killing process with pid 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1256691 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.422 00:01:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.353 00:01:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.353 00:18:08.353 real 0m26.832s 00:18:08.353 user 0m38.736s 00:18:08.353 sys 0m8.048s 00:18:08.353 00:01:42 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:08.353 00:01:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 ************************************ 00:18:08.353 END TEST nvmf_zcopy 00:18:08.353 ************************************ 00:18:08.353 00:01:42 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:08.353 00:01:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:08.353 00:01:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:08.353 00:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.353 ************************************ 00:18:08.353 START TEST nvmf_nmic 00:18:08.353 ************************************ 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:08.353 * Looking for test storage... 
00:18:08.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.353 00:01:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.354 00:01:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.295 00:01:44 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:10.295 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:10.295 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:10.295 Found net devices under 0000:08:00.0: cvl_0_0 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:10.295 Found net devices under 0000:08:00.1: cvl_0_1 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.295 00:01:44 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:18:10.295 00:18:10.295 --- 10.0.0.2 ping statistics --- 00:18:10.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.295 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:18:10.295 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:10.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:18:10.295 00:18:10.295 --- 10.0.0.1 ping statistics --- 00:18:10.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.296 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1260096 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1260096 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1260096 ']' 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:10.296 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.296 [2024-07-16 00:01:44.547153] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:10.296 [2024-07-16 00:01:44.547256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.296 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.296 [2024-07-16 00:01:44.613445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.296 [2024-07-16 00:01:44.706368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.296 [2024-07-16 00:01:44.706426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.296 [2024-07-16 00:01:44.706442] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.296 [2024-07-16 00:01:44.706456] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.296 [2024-07-16 00:01:44.706468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.296 [2024-07-16 00:01:44.710161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.296 [2024-07-16 00:01:44.710223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.296 [2024-07-16 00:01:44.710308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.296 [2024-07-16 00:01:44.710273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 [2024-07-16 00:01:44.856785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 Malloc0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 [2024-07-16 00:01:44.906990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:10.554 test case1: single bdev can't be used in multiple subsystems 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 [2024-07-16 00:01:44.930870] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:10.554 [2024-07-16 00:01:44.930902] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:10.554 [2024-07-16 00:01:44.930919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.554 request: 00:18:10.554 { 00:18:10.554 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.554 "namespace": { 00:18:10.554 "bdev_name": "Malloc0", 00:18:10.554 "no_auto_visible": false 00:18:10.554 }, 00:18:10.554 "method": "nvmf_subsystem_add_ns", 00:18:10.554 "req_id": 1 00:18:10.554 } 00:18:10.554 Got JSON-RPC error response 00:18:10.554 response: 00:18:10.554 { 00:18:10.554 "code": -32602, 00:18:10.554 "message": "Invalid parameters" 00:18:10.554 } 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:18:10.554 Adding namespace failed - expected result. 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:10.554 test case2: host connect to nvmf target in multiple paths 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:10.554 [2024-07-16 00:01:44.938981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.554 00:01:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.119 00:01:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:11.377 00:01:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:11.377 00:01:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:11.377 00:01:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.377 00:01:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:11.377 00:01:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:13.930 00:01:47 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:13.930 00:01:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:13.930 [global] 00:18:13.930 thread=1 00:18:13.930 invalidate=1 00:18:13.930 rw=write 00:18:13.930 time_based=1 00:18:13.930 runtime=1 00:18:13.930 ioengine=libaio 00:18:13.930 direct=1 00:18:13.930 bs=4096 00:18:13.930 iodepth=1 00:18:13.930 norandommap=0 00:18:13.930 numjobs=1 00:18:13.930 00:18:13.930 verify_dump=1 00:18:13.930 verify_backlog=512 00:18:13.930 verify_state_save=0 00:18:13.930 do_verify=1 00:18:13.930 verify=crc32c-intel 00:18:13.930 [job0] 00:18:13.930 filename=/dev/nvme0n1 00:18:13.930 Could not set queue depth (nvme0n1) 00:18:13.930 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.930 fio-3.35 00:18:13.930 Starting 1 thread 00:18:14.863 00:18:14.863 job0: (groupid=0, jobs=1): err= 0: pid=1260475: Tue Jul 16 00:01:49 2024 00:18:14.863 read: IOPS=1743, BW=6974KiB/s (7141kB/s)(6988KiB/1002msec) 00:18:14.863 slat (nsec): min=5951, max=48334, avg=9743.84, stdev=4562.95 00:18:14.863 clat (usec): min=191, max=41970, avg=349.33, stdev=2198.58 00:18:14.863 lat (usec): min=198, max=42012, avg=359.07, stdev=2199.59 00:18:14.863 clat percentiles (usec): 00:18:14.863 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:18:14.863 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:18:14.863 | 
70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 269], 00:18:14.863 | 99.00th=[ 289], 99.50th=[ 322], 99.90th=[41681], 99.95th=[42206], 00:18:14.863 | 99.99th=[42206] 00:18:14.863 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:18:14.863 slat (nsec): min=7604, max=44966, avg=12221.10, stdev=5451.43 00:18:14.863 clat (usec): min=127, max=302, avg=164.13, stdev=21.72 00:18:14.864 lat (usec): min=136, max=335, avg=176.35, stdev=25.68 00:18:14.864 clat percentiles (usec): 00:18:14.864 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:18:14.864 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 167], 00:18:14.864 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 200], 00:18:14.864 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 285], 99.95th=[ 302], 00:18:14.864 | 99.99th=[ 302] 00:18:14.864 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=8192.00, stdev=5792.62, samples=2 00:18:14.864 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:18:14.864 lat (usec) : 250=88.35%, 500=11.52% 00:18:14.864 lat (msec) : 50=0.13% 00:18:14.864 cpu : usr=3.40%, sys=5.89%, ctx=3795, majf=0, minf=1 00:18:14.864 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.864 issued rwts: total=1747,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.864 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.864 00:18:14.864 Run status group 0 (all jobs): 00:18:14.864 READ: bw=6974KiB/s (7141kB/s), 6974KiB/s-6974KiB/s (7141kB/s-7141kB/s), io=6988KiB (7156kB), run=1002-1002msec 00:18:14.864 WRITE: bw=8176KiB/s (8372kB/s), 8176KiB/s-8176KiB/s (8372kB/s-8372kB/s), io=8192KiB (8389kB), run=1002-1002msec 00:18:14.864 00:18:14.864 Disk stats (read/write): 00:18:14.864 nvme0n1: ios=1721/2048, merge=0/0, 
ticks=523/310, in_queue=833, util=91.78% 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.864 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.864 rmmod nvme_tcp 00:18:14.864 rmmod nvme_fabrics 00:18:14.864 rmmod nvme_keyring 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 1260096 ']' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1260096 ']' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260096' 00:18:15.123 killing process with pid 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1260096 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.123 00:01:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.661 00:01:51 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.661 00:18:17.661 real 0m8.881s 00:18:17.661 user 0m19.969s 00:18:17.661 sys 0m2.057s 00:18:17.661 00:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:17.661 00:01:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:17.661 ************************************ 00:18:17.661 END TEST nvmf_nmic 00:18:17.661 ************************************ 00:18:17.661 00:01:51 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:17.661 00:01:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:17.661 00:01:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:17.661 00:01:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.661 ************************************ 00:18:17.661 START TEST nvmf_fio_target 00:18:17.661 ************************************ 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:17.661 * Looking for test storage... 
00:18:17.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:17.661 00:01:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.038 
00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.038 
00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:19.038 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:19.039 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:19.039 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:19.039 Found net devices under 0000:08:00.0: cvl_0_0 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:19.039 Found net devices under 0000:08:00.1: cvl_0_1 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.039 00:01:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:19.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:18:19.039 00:18:19.039 --- 10.0.0.2 ping statistics --- 00:18:19.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.039 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:18:19.039 00:18:19.039 --- 10.0.0.1 ping statistics --- 00:18:19.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.039 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.039 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1262074 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1262074 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 
-- # '[' -z 1262074 ']' 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:19.297 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.297 [2024-07-16 00:01:53.616611] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:19.297 [2024-07-16 00:01:53.616701] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.297 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.297 [2024-07-16 00:01:53.680703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.297 [2024-07-16 00:01:53.768368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.297 [2024-07-16 00:01:53.768424] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.297 [2024-07-16 00:01:53.768441] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.297 [2024-07-16 00:01:53.768455] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.297 [2024-07-16 00:01:53.768467] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.297 [2024-07-16 00:01:53.768548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.297 [2024-07-16 00:01:53.768605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.297 [2024-07-16 00:01:53.768656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.297 [2024-07-16 00:01:53.768660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.554 00:01:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:19.811 [2024-07-16 00:01:54.178661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.811 00:01:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.069 00:01:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:20.069 00:01:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.327 00:01:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:20.327 00:01:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:18:20.892 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:20.892 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:20.892 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:20.892 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:21.150 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.408 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:21.408 00:01:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.666 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:21.666 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:21.924 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:21.924 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:22.181 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:22.439 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:22.439 00:01:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:22.697 00:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:22.697 00:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.954 00:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.211 [2024-07-16 00:01:57.548956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.211 00:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:23.469 00:01:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:23.726 00:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:24.292 00:01:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:24.292 00:01:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:24.292 00:01:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:24.292 00:01:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:24.292 00:01:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:24.292 00:01:58 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:26.189 00:02:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:26.190 00:02:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:26.190 [global] 00:18:26.190 thread=1 00:18:26.190 invalidate=1 00:18:26.190 rw=write 00:18:26.190 time_based=1 00:18:26.190 runtime=1 00:18:26.190 ioengine=libaio 00:18:26.190 direct=1 00:18:26.190 bs=4096 00:18:26.190 iodepth=1 00:18:26.190 norandommap=0 00:18:26.190 numjobs=1 00:18:26.190 00:18:26.190 verify_dump=1 00:18:26.190 verify_backlog=512 00:18:26.190 verify_state_save=0 00:18:26.190 do_verify=1 00:18:26.190 verify=crc32c-intel 00:18:26.190 [job0] 00:18:26.190 filename=/dev/nvme0n1 00:18:26.190 [job1] 00:18:26.190 filename=/dev/nvme0n2 00:18:26.190 [job2] 00:18:26.190 filename=/dev/nvme0n3 00:18:26.190 [job3] 00:18:26.190 filename=/dev/nvme0n4 00:18:26.190 Could not set queue depth (nvme0n1) 00:18:26.190 Could not set queue depth (nvme0n2) 00:18:26.190 Could not set queue depth (nvme0n3) 00:18:26.190 Could not set queue depth (nvme0n4) 00:18:26.448 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.448 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:18:26.448 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.448 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:26.448 fio-3.35 00:18:26.448 Starting 4 threads 00:18:27.818 00:18:27.818 job0: (groupid=0, jobs=1): err= 0: pid=1262849: Tue Jul 16 00:02:02 2024 00:18:27.818 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:18:27.818 slat (nsec): min=7563, max=35410, avg=21451.81, stdev=9236.73 00:18:27.818 clat (usec): min=40891, max=41389, avg=40992.94, stdev=100.96 00:18:27.818 lat (usec): min=40924, max=41396, avg=41014.39, stdev=96.21 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:27.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:27.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:27.818 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:27.818 | 99.99th=[41157] 00:18:27.818 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:27.818 slat (nsec): min=7750, max=44501, avg=11489.08, stdev=6299.13 00:18:27.818 clat (usec): min=198, max=652, avg=258.13, stdev=39.85 00:18:27.818 lat (usec): min=215, max=674, avg=269.62, stdev=41.24 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 241], 00:18:27.818 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:18:27.818 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 351], 00:18:27.818 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 652], 99.95th=[ 652], 00:18:27.818 | 99.99th=[ 652] 00:18:27.818 bw ( KiB/s): min= 4096, max= 4096, per=19.73%, avg=4096.00, stdev= 0.00, samples=1 00:18:27.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:27.818 lat (usec) : 250=69.04%, 500=26.83%, 
750=0.19% 00:18:27.818 lat (msec) : 50=3.94% 00:18:27.818 cpu : usr=0.70%, sys=0.50%, ctx=533, majf=0, minf=1 00:18:27.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.818 job1: (groupid=0, jobs=1): err= 0: pid=1262865: Tue Jul 16 00:02:02 2024 00:18:27.818 read: IOPS=1083, BW=4335KiB/s (4439kB/s)(4400KiB/1015msec) 00:18:27.818 slat (nsec): min=6414, max=35754, avg=8553.62, stdev=4194.64 00:18:27.818 clat (usec): min=219, max=41063, avg=586.16, stdev=3660.55 00:18:27.818 lat (usec): min=226, max=41080, avg=594.71, stdev=3662.11 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:18:27.818 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:18:27.818 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 302], 00:18:27.818 | 99.00th=[ 478], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:27.818 | 99.99th=[41157] 00:18:27.818 write: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec); 0 zone resets 00:18:27.818 slat (usec): min=8, max=12867, avg=22.37, stdev=328.03 00:18:27.818 clat (usec): min=155, max=1317, avg=206.93, stdev=44.87 00:18:27.818 lat (usec): min=163, max=14184, avg=229.31, stdev=358.27 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:18:27.818 | 30.00th=[ 178], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 217], 00:18:27.818 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:18:27.818 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 424], 99.95th=[ 1319], 00:18:27.818 | 99.99th=[ 1319] 00:18:27.818 bw ( KiB/s): min= 4096, max= 
8192, per=29.59%, avg=6144.00, stdev=2896.31, samples=2 00:18:27.818 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:27.818 lat (usec) : 250=78.98%, 500=20.64% 00:18:27.818 lat (msec) : 2=0.04%, 50=0.34% 00:18:27.818 cpu : usr=2.17%, sys=4.14%, ctx=2638, majf=0, minf=1 00:18:27.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 issued rwts: total=1100,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.818 job2: (groupid=0, jobs=1): err= 0: pid=1262870: Tue Jul 16 00:02:02 2024 00:18:27.818 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:27.818 slat (nsec): min=5889, max=49011, avg=10501.13, stdev=3639.36 00:18:27.818 clat (usec): min=220, max=41041, avg=702.00, stdev=4194.92 00:18:27.818 lat (usec): min=227, max=41056, avg=712.51, stdev=4195.60 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:18:27.818 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:18:27.818 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 00:18:27.818 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:27.818 | 99.99th=[41157] 00:18:27.818 write: IOPS=1171, BW=4687KiB/s (4800kB/s)(4692KiB/1001msec); 0 zone resets 00:18:27.818 slat (nsec): min=6677, max=60027, avg=13313.34, stdev=5841.05 00:18:27.818 clat (usec): min=158, max=397, avg=210.90, stdev=40.35 00:18:27.818 lat (usec): min=166, max=407, avg=224.22, stdev=41.27 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:18:27.818 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:18:27.818 | 70.00th=[ 221], 80.00th=[ 
241], 90.00th=[ 269], 95.00th=[ 302], 00:18:27.818 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 383], 99.95th=[ 400], 00:18:27.818 | 99.99th=[ 400] 00:18:27.818 bw ( KiB/s): min= 4096, max= 4096, per=19.73%, avg=4096.00, stdev= 0.00, samples=1 00:18:27.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:27.818 lat (usec) : 250=63.45%, 500=36.05% 00:18:27.818 lat (msec) : 50=0.50% 00:18:27.818 cpu : usr=1.80%, sys=2.50%, ctx=2197, majf=0, minf=1 00:18:27.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 issued rwts: total=1024,1173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.818 job3: (groupid=0, jobs=1): err= 0: pid=1262871: Tue Jul 16 00:02:02 2024 00:18:27.818 read: IOPS=1945, BW=7780KiB/s (7967kB/s)(7788KiB/1001msec) 00:18:27.818 slat (nsec): min=6207, max=45578, avg=9877.54, stdev=4447.22 00:18:27.818 clat (usec): min=221, max=495, avg=265.74, stdev=34.02 00:18:27.818 lat (usec): min=228, max=511, avg=275.62, stdev=37.03 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 239], 00:18:27.818 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 269], 00:18:27.818 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:18:27.818 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 486], 99.95th=[ 494], 00:18:27.818 | 99.99th=[ 494] 00:18:27.818 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:27.818 slat (nsec): min=7835, max=51709, avg=14308.29, stdev=6244.26 00:18:27.818 clat (usec): min=160, max=503, avg=205.19, stdev=24.05 00:18:27.818 lat (usec): min=168, max=524, avg=219.50, stdev=27.50 00:18:27.818 clat percentiles (usec): 00:18:27.818 | 1.00th=[ 167], 
5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:18:27.818 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:18:27.818 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 241], 00:18:27.818 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 334], 99.95th=[ 355], 00:18:27.818 | 99.99th=[ 502] 00:18:27.818 bw ( KiB/s): min= 8992, max= 8992, per=43.30%, avg=8992.00, stdev= 0.00, samples=1 00:18:27.818 iops : min= 2248, max= 2248, avg=2248.00, stdev= 0.00, samples=1 00:18:27.818 lat (usec) : 250=70.21%, 500=29.76%, 750=0.03% 00:18:27.818 cpu : usr=4.30%, sys=6.20%, ctx=3995, majf=0, minf=1 00:18:27.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:27.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.818 issued rwts: total=1947,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:27.818 00:18:27.818 Run status group 0 (all jobs): 00:18:27.818 READ: bw=15.7MiB/s (16.5MB/s), 83.9KiB/s-7780KiB/s (85.9kB/s-7967kB/s), io=16.0MiB (16.8MB), run=1001-1015msec 00:18:27.818 WRITE: bw=20.3MiB/s (21.3MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=20.6MiB (21.6MB), run=1001-1015msec 00:18:27.818 00:18:27.818 Disk stats (read/write): 00:18:27.818 nvme0n1: ios=67/512, merge=0/0, ticks=735/130, in_queue=865, util=87.37% 00:18:27.818 nvme0n2: ios=1138/1536, merge=0/0, ticks=651/305, in_queue=956, util=90.45% 00:18:27.818 nvme0n3: ios=660/1024, merge=0/0, ticks=707/210, in_queue=917, util=95.31% 00:18:27.818 nvme0n4: ios=1593/1916, merge=0/0, ticks=456/372, in_queue=828, util=95.90% 00:18:27.818 00:02:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:27.818 [global] 00:18:27.818 thread=1 00:18:27.818 invalidate=1 00:18:27.818 
rw=randwrite 00:18:27.818 time_based=1 00:18:27.818 runtime=1 00:18:27.818 ioengine=libaio 00:18:27.818 direct=1 00:18:27.818 bs=4096 00:18:27.818 iodepth=1 00:18:27.818 norandommap=0 00:18:27.818 numjobs=1 00:18:27.818 00:18:27.818 verify_dump=1 00:18:27.818 verify_backlog=512 00:18:27.818 verify_state_save=0 00:18:27.818 do_verify=1 00:18:27.818 verify=crc32c-intel 00:18:27.818 [job0] 00:18:27.818 filename=/dev/nvme0n1 00:18:27.818 [job1] 00:18:27.818 filename=/dev/nvme0n2 00:18:27.818 [job2] 00:18:27.818 filename=/dev/nvme0n3 00:18:27.818 [job3] 00:18:27.818 filename=/dev/nvme0n4 00:18:27.818 Could not set queue depth (nvme0n1) 00:18:27.818 Could not set queue depth (nvme0n2) 00:18:27.818 Could not set queue depth (nvme0n3) 00:18:27.818 Could not set queue depth (nvme0n4) 00:18:27.818 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:27.818 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:27.818 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:27.818 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:27.818 fio-3.35 00:18:27.818 Starting 4 threads 00:18:29.190 00:18:29.190 job0: (groupid=0, jobs=1): err= 0: pid=1263042: Tue Jul 16 00:02:03 2024 00:18:29.190 read: IOPS=1006, BW=4027KiB/s (4124kB/s)(4136KiB/1027msec) 00:18:29.190 slat (nsec): min=5359, max=32035, avg=6377.77, stdev=2794.94 00:18:29.190 clat (usec): min=215, max=41321, avg=649.63, stdev=3986.16 00:18:29.190 lat (usec): min=221, max=41338, avg=656.01, stdev=3988.30 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:18:29.190 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:18:29.190 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 326], 00:18:29.190 | 
99.00th=[ 449], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:29.190 | 99.99th=[41157] 00:18:29.190 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:18:29.190 slat (nsec): min=6713, max=58336, avg=11444.08, stdev=6688.53 00:18:29.190 clat (usec): min=151, max=3733, avg=211.20, stdev=97.26 00:18:29.190 lat (usec): min=159, max=3740, avg=222.65, stdev=98.24 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:18:29.190 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:18:29.190 | 70.00th=[ 217], 80.00th=[ 235], 90.00th=[ 260], 95.00th=[ 285], 00:18:29.190 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 3720], 00:18:29.190 | 99.99th=[ 3720] 00:18:29.190 bw ( KiB/s): min= 4096, max= 8192, per=34.47%, avg=6144.00, stdev=2896.31, samples=2 00:18:29.190 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:18:29.190 lat (usec) : 250=71.60%, 500=27.98% 00:18:29.190 lat (msec) : 4=0.04%, 50=0.39% 00:18:29.190 cpu : usr=1.95%, sys=1.66%, ctx=2573, majf=0, minf=1 00:18:29.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.190 job1: (groupid=0, jobs=1): err= 0: pid=1263043: Tue Jul 16 00:02:03 2024 00:18:29.190 read: IOPS=948, BW=3792KiB/s (3883kB/s)(3796KiB/1001msec) 00:18:29.190 slat (nsec): min=6502, max=47980, avg=13930.25, stdev=4360.82 00:18:29.190 clat (usec): min=230, max=41322, avg=765.94, stdev=4150.53 00:18:29.190 lat (usec): min=237, max=41330, avg=779.87, stdev=4150.46 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 
20.00th=[ 260], 00:18:29.190 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 330], 00:18:29.190 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 478], 00:18:29.190 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:29.190 | 99.99th=[41157] 00:18:29.190 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:29.190 slat (nsec): min=7585, max=56517, avg=15832.36, stdev=6297.05 00:18:29.190 clat (usec): min=168, max=473, avg=229.28, stdev=28.46 00:18:29.190 lat (usec): min=177, max=487, avg=245.11, stdev=29.65 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 208], 00:18:29.190 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:18:29.190 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 285], 00:18:29.190 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 420], 99.95th=[ 474], 00:18:29.190 | 99.99th=[ 474] 00:18:29.190 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.190 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.190 lat (usec) : 250=49.11%, 500=49.37%, 750=0.76%, 1000=0.10% 00:18:29.190 lat (msec) : 2=0.15%, 50=0.51% 00:18:29.190 cpu : usr=2.50%, sys=4.00%, ctx=1973, majf=0, minf=1 00:18:29.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 issued rwts: total=949,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.190 job2: (groupid=0, jobs=1): err= 0: pid=1263044: Tue Jul 16 00:02:03 2024 00:18:29.190 read: IOPS=48, BW=193KiB/s (198kB/s)(200KiB/1034msec) 00:18:29.190 slat (nsec): min=7816, max=40022, avg=21948.22, stdev=8731.13 00:18:29.190 clat (usec): min=418, 
max=41064, avg=18233.29, stdev=20259.58 00:18:29.190 lat (usec): min=432, max=41078, avg=18255.24, stdev=20260.07 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 420], 5.00th=[ 429], 10.00th=[ 429], 20.00th=[ 449], 00:18:29.190 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[40633], 00:18:29.190 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:29.190 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:29.190 | 99.99th=[41157] 00:18:29.190 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:18:29.190 slat (nsec): min=6734, max=41356, avg=10272.83, stdev=5735.21 00:18:29.190 clat (usec): min=157, max=1403, avg=223.19, stdev=86.31 00:18:29.190 lat (usec): min=165, max=1411, avg=233.46, stdev=88.06 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 190], 00:18:29.190 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:18:29.190 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 289], 95.00th=[ 314], 00:18:29.190 | 99.00th=[ 510], 99.50th=[ 783], 99.90th=[ 1401], 99.95th=[ 1401], 00:18:29.190 | 99.99th=[ 1401] 00:18:29.190 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:18:29.190 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:29.190 lat (usec) : 250=78.11%, 500=16.90%, 750=0.36%, 1000=0.53% 00:18:29.190 lat (msec) : 2=0.18%, 50=3.91% 00:18:29.190 cpu : usr=0.00%, sys=0.97%, ctx=563, majf=0, minf=1 00:18:29.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 issued rwts: total=50,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.190 job3: (groupid=0, jobs=1): err= 0: 
pid=1263045: Tue Jul 16 00:02:03 2024 00:18:29.190 read: IOPS=1042, BW=4172KiB/s (4272kB/s)(4280KiB/1026msec) 00:18:29.190 slat (nsec): min=9502, max=33167, avg=11576.25, stdev=2625.10 00:18:29.190 clat (usec): min=227, max=40971, avg=630.15, stdev=3915.37 00:18:29.190 lat (usec): min=237, max=41000, avg=641.72, stdev=3917.15 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 239], 00:18:29.190 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:18:29.190 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 293], 00:18:29.190 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:29.190 | 99.99th=[41157] 00:18:29.190 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:18:29.190 slat (usec): min=8, max=4607, avg=16.78, stdev=117.30 00:18:29.190 clat (usec): min=161, max=467, avg=198.48, stdev=24.69 00:18:29.190 lat (usec): min=170, max=4843, avg=215.26, stdev=121.05 00:18:29.190 clat percentiles (usec): 00:18:29.190 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:18:29.190 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:18:29.190 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 239], 00:18:29.190 | 99.00th=[ 273], 99.50th=[ 347], 99.90th=[ 433], 99.95th=[ 469], 00:18:29.190 | 99.99th=[ 469] 00:18:29.190 bw ( KiB/s): min= 3920, max= 8368, per=34.47%, avg=6144.00, stdev=3145.21, samples=2 00:18:29.190 iops : min= 980, max= 2092, avg=1536.00, stdev=786.30, samples=2 00:18:29.190 lat (usec) : 250=88.33%, 500=11.24%, 750=0.04% 00:18:29.190 lat (msec) : 50=0.38% 00:18:29.190 cpu : usr=1.37%, sys=3.61%, ctx=2608, majf=0, minf=1 00:18:29.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:29.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.190 
issued rwts: total=1070,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:29.190 00:18:29.190 Run status group 0 (all jobs): 00:18:29.190 READ: bw=11.7MiB/s (12.3MB/s), 193KiB/s-4172KiB/s (198kB/s-4272kB/s), io=12.1MiB (12.7MB), run=1001-1034msec 00:18:29.190 WRITE: bw=17.4MiB/s (18.3MB/s), 1981KiB/s-5988KiB/s (2028kB/s-6132kB/s), io=18.0MiB (18.9MB), run=1001-1034msec 00:18:29.190 00:18:29.190 Disk stats (read/write): 00:18:29.190 nvme0n1: ios=1082/1536, merge=0/0, ticks=1179/316, in_queue=1495, util=94.39% 00:18:29.190 nvme0n2: ios=613/1024, merge=0/0, ticks=735/221, in_queue=956, util=95.43% 00:18:29.190 nvme0n3: ios=62/512, merge=0/0, ticks=1649/116, in_queue=1765, util=96.98% 00:18:29.190 nvme0n4: ios=1119/1536, merge=0/0, ticks=1131/301, in_queue=1432, util=96.33% 00:18:29.190 00:02:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:29.190 [global] 00:18:29.190 thread=1 00:18:29.190 invalidate=1 00:18:29.190 rw=write 00:18:29.190 time_based=1 00:18:29.190 runtime=1 00:18:29.190 ioengine=libaio 00:18:29.190 direct=1 00:18:29.190 bs=4096 00:18:29.190 iodepth=128 00:18:29.190 norandommap=0 00:18:29.190 numjobs=1 00:18:29.190 00:18:29.190 verify_dump=1 00:18:29.190 verify_backlog=512 00:18:29.190 verify_state_save=0 00:18:29.190 do_verify=1 00:18:29.190 verify=crc32c-intel 00:18:29.190 [job0] 00:18:29.190 filename=/dev/nvme0n1 00:18:29.190 [job1] 00:18:29.190 filename=/dev/nvme0n2 00:18:29.190 [job2] 00:18:29.190 filename=/dev/nvme0n3 00:18:29.190 [job3] 00:18:29.190 filename=/dev/nvme0n4 00:18:29.190 Could not set queue depth (nvme0n1) 00:18:29.190 Could not set queue depth (nvme0n2) 00:18:29.190 Could not set queue depth (nvme0n3) 00:18:29.191 Could not set queue depth (nvme0n4) 00:18:29.450 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:18:29.450 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.450 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.450 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.450 fio-3.35 00:18:29.450 Starting 4 threads 00:18:30.417 00:18:30.417 job0: (groupid=0, jobs=1): err= 0: pid=1263224: Tue Jul 16 00:02:04 2024 00:18:30.417 read: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1003msec) 00:18:30.417 slat (usec): min=3, max=46093, avg=150.27, stdev=1167.36 00:18:30.417 clat (usec): min=1711, max=80230, avg=20137.20, stdev=16880.20 00:18:30.417 lat (usec): min=2014, max=87444, avg=20287.47, stdev=16968.37 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 5473], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:18:30.417 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:18:30.417 | 70.00th=[13829], 80.00th=[25035], 90.00th=[51643], 95.00th=[60031], 00:18:30.417 | 99.00th=[73925], 99.50th=[76022], 99.90th=[80217], 99.95th=[80217], 00:18:30.417 | 99.99th=[80217] 00:18:30.417 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:18:30.417 slat (usec): min=4, max=12527, avg=121.57, stdev=581.37 00:18:30.417 clat (usec): min=8535, max=79527, avg=15833.21, stdev=10281.05 00:18:30.417 lat (usec): min=8558, max=79548, avg=15954.79, stdev=10370.82 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[11076], 00:18:30.417 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:18:30.417 | 70.00th=[14222], 80.00th=[18220], 90.00th=[22152], 95.00th=[32375], 00:18:30.417 | 99.00th=[67634], 99.50th=[68682], 99.90th=[79168], 99.95th=[79168], 00:18:30.417 | 99.99th=[79168] 00:18:30.417 bw ( KiB/s): min=12288, max=16384, per=25.81%, 
avg=14336.00, stdev=2896.31, samples=2 00:18:30.417 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:18:30.417 lat (msec) : 2=0.01%, 4=0.17%, 10=3.88%, 20=75.61%, 50=12.93% 00:18:30.417 lat (msec) : 100=7.39% 00:18:30.417 cpu : usr=6.49%, sys=7.29%, ctx=366, majf=0, minf=15 00:18:30.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:30.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.417 issued rwts: total=3469,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.417 job1: (groupid=0, jobs=1): err= 0: pid=1263225: Tue Jul 16 00:02:04 2024 00:18:30.417 read: IOPS=3043, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:18:30.417 slat (usec): min=4, max=19535, avg=172.38, stdev=1285.21 00:18:30.417 clat (usec): min=2300, max=70314, avg=20916.91, stdev=9603.18 00:18:30.417 lat (usec): min=7747, max=70320, avg=21089.30, stdev=9719.26 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 7898], 5.00th=[10683], 10.00th=[11076], 20.00th=[11731], 00:18:30.417 | 30.00th=[17171], 40.00th=[18482], 50.00th=[19792], 60.00th=[20317], 00:18:30.417 | 70.00th=[21103], 80.00th=[26084], 90.00th=[32900], 95.00th=[37487], 00:18:30.417 | 99.00th=[60031], 99.50th=[62653], 99.90th=[70779], 99.95th=[70779], 00:18:30.417 | 99.99th=[70779] 00:18:30.417 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:18:30.417 slat (usec): min=5, max=15816, avg=147.67, stdev=890.87 00:18:30.417 clat (usec): min=1429, max=70317, avg=20678.24, stdev=8638.29 00:18:30.417 lat (usec): min=1439, max=70325, avg=20825.91, stdev=8698.51 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[13566], 00:18:30.417 | 30.00th=[15008], 40.00th=[16450], 50.00th=[20579], 60.00th=[21890], 00:18:30.417 
| 70.00th=[21890], 80.00th=[23987], 90.00th=[33817], 95.00th=[38536], 00:18:30.417 | 99.00th=[50070], 99.50th=[55313], 99.90th=[55313], 99.95th=[70779], 00:18:30.417 | 99.99th=[70779] 00:18:30.417 bw ( KiB/s): min=11664, max=12912, per=22.12%, avg=12288.00, stdev=882.47, samples=2 00:18:30.417 iops : min= 2916, max= 3228, avg=3072.00, stdev=220.62, samples=2 00:18:30.417 lat (msec) : 2=0.05%, 4=0.02%, 10=1.12%, 20=48.51%, 50=48.87% 00:18:30.417 lat (msec) : 100=1.43% 00:18:30.417 cpu : usr=2.78%, sys=4.08%, ctx=284, majf=0, minf=13 00:18:30.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:30.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.417 issued rwts: total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.417 job2: (groupid=0, jobs=1): err= 0: pid=1263226: Tue Jul 16 00:02:04 2024 00:18:30.417 read: IOPS=3427, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1007msec) 00:18:30.417 slat (usec): min=2, max=19318, avg=143.59, stdev=953.75 00:18:30.417 clat (usec): min=5544, max=70999, avg=18033.63, stdev=10815.46 00:18:30.417 lat (usec): min=6140, max=71036, avg=18177.22, stdev=10910.49 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 6587], 5.00th=[ 9503], 10.00th=[11338], 20.00th=[12780], 00:18:30.417 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[14746], 00:18:30.417 | 70.00th=[15926], 80.00th=[18220], 90.00th=[37487], 95.00th=[42206], 00:18:30.417 | 99.00th=[58459], 99.50th=[58459], 99.90th=[63177], 99.95th=[69731], 00:18:30.417 | 99.99th=[70779] 00:18:30.417 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:18:30.417 slat (usec): min=4, max=17202, avg=133.57, stdev=826.71 00:18:30.417 clat (usec): min=6388, max=57342, avg=18201.44, stdev=8144.75 00:18:30.417 lat (usec): min=6393, max=57375, 
avg=18335.01, stdev=8219.93 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 6390], 5.00th=[ 9765], 10.00th=[11731], 20.00th=[12780], 00:18:30.417 | 30.00th=[12911], 40.00th=[13829], 50.00th=[14091], 60.00th=[16450], 00:18:30.417 | 70.00th=[20841], 80.00th=[23725], 90.00th=[31327], 95.00th=[35914], 00:18:30.417 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47973], 99.95th=[55313], 00:18:30.417 | 99.99th=[57410] 00:18:30.417 bw ( KiB/s): min=10952, max=17720, per=25.81%, avg=14336.00, stdev=4785.70, samples=2 00:18:30.417 iops : min= 2738, max= 4430, avg=3584.00, stdev=1196.42, samples=2 00:18:30.417 lat (msec) : 10=5.86%, 20=70.35%, 50=21.95%, 100=1.85% 00:18:30.417 cpu : usr=2.68%, sys=5.47%, ctx=304, majf=0, minf=7 00:18:30.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:30.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.417 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.417 job3: (groupid=0, jobs=1): err= 0: pid=1263227: Tue Jul 16 00:02:04 2024 00:18:30.417 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:18:30.417 slat (usec): min=4, max=32029, avg=145.03, stdev=1102.66 00:18:30.417 clat (usec): min=4162, max=65201, avg=18213.94, stdev=10834.31 00:18:30.417 lat (usec): min=4169, max=65228, avg=18358.97, stdev=10906.14 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 6587], 5.00th=[10290], 10.00th=[11600], 20.00th=[12649], 00:18:30.417 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13435], 60.00th=[14091], 00:18:30.417 | 70.00th=[16319], 80.00th=[21890], 90.00th=[33162], 95.00th=[49546], 00:18:30.417 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:18:30.417 | 99.99th=[65274] 00:18:30.417 write: IOPS=3735, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1002msec); 0 
zone resets 00:18:30.417 slat (usec): min=5, max=17090, avg=114.42, stdev=647.24 00:18:30.417 clat (usec): min=386, max=56422, avg=16445.17, stdev=6258.50 00:18:30.417 lat (usec): min=1047, max=56435, avg=16559.60, stdev=6309.76 00:18:30.417 clat percentiles (usec): 00:18:30.417 | 1.00th=[ 2540], 5.00th=[ 6194], 10.00th=[ 9896], 20.00th=[12125], 00:18:30.417 | 30.00th=[12911], 40.00th=[14091], 50.00th=[15139], 60.00th=[17695], 00:18:30.417 | 70.00th=[20055], 80.00th=[21890], 90.00th=[22152], 95.00th=[24249], 00:18:30.417 | 99.00th=[36963], 99.50th=[37487], 99.90th=[55837], 99.95th=[56361], 00:18:30.417 | 99.99th=[56361] 00:18:30.417 bw ( KiB/s): min=12656, max=16264, per=26.03%, avg=14460.00, stdev=2551.24, samples=2 00:18:30.417 iops : min= 3164, max= 4066, avg=3615.00, stdev=637.81, samples=2 00:18:30.417 lat (usec) : 500=0.01%, 1000=0.01% 00:18:30.417 lat (msec) : 2=0.33%, 4=0.70%, 10=6.03%, 20=65.33%, 50=26.29% 00:18:30.417 lat (msec) : 100=1.30% 00:18:30.417 cpu : usr=5.89%, sys=8.49%, ctx=412, majf=0, minf=15 00:18:30.417 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:30.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.417 issued rwts: total=3584,3743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.417 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.417 00:18:30.417 Run status group 0 (all jobs): 00:18:30.417 READ: bw=52.6MiB/s (55.2MB/s), 11.9MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=53.0MiB (55.6MB), run=1002-1007msec 00:18:30.417 WRITE: bw=54.2MiB/s (56.9MB/s), 11.9MiB/s-14.6MiB/s (12.5MB/s-15.3MB/s), io=54.6MiB (57.3MB), run=1002-1007msec 00:18:30.417 00:18:30.418 Disk stats (read/write): 00:18:30.418 nvme0n1: ios=2610/2901, merge=0/0, ticks=15244/12633, in_queue=27877, util=87.37% 00:18:30.418 nvme0n2: ios=2402/2560, merge=0/0, ticks=50378/54690, in_queue=105068, util=100.00% 00:18:30.418 
nvme0n3: ios=3129/3491, merge=0/0, ticks=22767/29502, in_queue=52269, util=90.94% 00:18:30.418 nvme0n4: ios=2837/3072, merge=0/0, ticks=47868/44256, in_queue=92124, util=97.79% 00:18:30.418 00:02:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:30.418 [global] 00:18:30.418 thread=1 00:18:30.418 invalidate=1 00:18:30.418 rw=randwrite 00:18:30.418 time_based=1 00:18:30.418 runtime=1 00:18:30.418 ioengine=libaio 00:18:30.418 direct=1 00:18:30.418 bs=4096 00:18:30.418 iodepth=128 00:18:30.418 norandommap=0 00:18:30.418 numjobs=1 00:18:30.418 00:18:30.676 verify_dump=1 00:18:30.676 verify_backlog=512 00:18:30.676 verify_state_save=0 00:18:30.676 do_verify=1 00:18:30.676 verify=crc32c-intel 00:18:30.676 [job0] 00:18:30.676 filename=/dev/nvme0n1 00:18:30.676 [job1] 00:18:30.676 filename=/dev/nvme0n2 00:18:30.676 [job2] 00:18:30.676 filename=/dev/nvme0n3 00:18:30.676 [job3] 00:18:30.676 filename=/dev/nvme0n4 00:18:30.676 Could not set queue depth (nvme0n1) 00:18:30.676 Could not set queue depth (nvme0n2) 00:18:30.676 Could not set queue depth (nvme0n3) 00:18:30.676 Could not set queue depth (nvme0n4) 00:18:30.676 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:30.676 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:30.676 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:30.676 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:30.676 fio-3.35 00:18:30.676 Starting 4 threads 00:18:32.045 00:18:32.045 job0: (groupid=0, jobs=1): err= 0: pid=1263400: Tue Jul 16 00:02:06 2024 00:18:32.045 read: IOPS=4928, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1004msec) 00:18:32.045 slat (usec): min=3, max=6233, 
avg=90.26, stdev=516.38 00:18:32.045 clat (usec): min=2400, max=26389, avg=11865.93, stdev=2283.05 00:18:32.045 lat (usec): min=3264, max=26404, avg=11956.18, stdev=2328.01 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11076], 00:18:32.045 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:18:32.045 | 70.00th=[11600], 80.00th=[12387], 90.00th=[14484], 95.00th=[16909], 00:18:32.045 | 99.00th=[21627], 99.50th=[21627], 99.90th=[22676], 99.95th=[23987], 00:18:32.045 | 99.99th=[26346] 00:18:32.045 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:18:32.045 slat (usec): min=5, max=40580, avg=96.97, stdev=824.07 00:18:32.045 clat (usec): min=5909, max=53250, avg=13144.52, stdev=7254.16 00:18:32.045 lat (usec): min=5919, max=53296, avg=13241.49, stdev=7305.69 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 7373], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[10814], 00:18:32.045 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:18:32.045 | 70.00th=[11600], 80.00th=[12518], 90.00th=[14877], 95.00th=[21627], 00:18:32.045 | 99.00th=[49546], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:18:32.045 | 99.99th=[53216] 00:18:32.045 bw ( KiB/s): min=18525, max=22472, per=33.09%, avg=20498.50, stdev=2790.95, samples=2 00:18:32.045 iops : min= 4631, max= 5618, avg=5124.50, stdev=697.91, samples=2 00:18:32.045 lat (msec) : 4=0.23%, 10=7.48%, 20=88.41%, 50=3.42%, 100=0.47% 00:18:32.045 cpu : usr=7.98%, sys=10.37%, ctx=411, majf=0, minf=1 00:18:32.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:32.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.045 issued rwts: total=4948,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.045 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:18:32.045 job1: (groupid=0, jobs=1): err= 0: pid=1263401: Tue Jul 16 00:02:06 2024 00:18:32.045 read: IOPS=2185, BW=8741KiB/s (8951kB/s)(8776KiB/1004msec) 00:18:32.045 slat (usec): min=4, max=17208, avg=184.01, stdev=1085.72 00:18:32.045 clat (usec): min=3793, max=86560, avg=20377.08, stdev=11760.80 00:18:32.045 lat (usec): min=3809, max=86577, avg=20561.09, stdev=11871.16 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 7111], 5.00th=[13042], 10.00th=[13698], 20.00th=[14091], 00:18:32.045 | 30.00th=[14484], 40.00th=[15139], 50.00th=[15926], 60.00th=[16909], 00:18:32.045 | 70.00th=[20055], 80.00th=[23200], 90.00th=[34866], 95.00th=[44303], 00:18:32.045 | 99.00th=[71828], 99.50th=[79168], 99.90th=[86508], 99.95th=[86508], 00:18:32.045 | 99.99th=[86508] 00:18:32.045 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:18:32.045 slat (usec): min=4, max=9348, avg=216.41, stdev=986.75 00:18:32.045 clat (usec): min=1528, max=93667, avg=32307.55, stdev=24395.69 00:18:32.045 lat (usec): min=1545, max=93680, avg=32523.96, stdev=24560.36 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 7308], 5.00th=[ 8291], 10.00th=[10028], 20.00th=[11863], 00:18:32.045 | 30.00th=[12911], 40.00th=[19530], 50.00th=[24773], 60.00th=[27395], 00:18:32.045 | 70.00th=[36963], 80.00th=[57934], 90.00th=[70779], 95.00th=[85459], 00:18:32.045 | 99.00th=[90702], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:18:32.045 | 99.99th=[93848] 00:18:32.045 bw ( KiB/s): min= 7248, max=13258, per=16.55%, avg=10253.00, stdev=4249.71, samples=2 00:18:32.045 iops : min= 1812, max= 3314, avg=2563.00, stdev=1062.07, samples=2 00:18:32.045 lat (msec) : 2=0.04%, 4=0.15%, 10=6.14%, 20=48.74%, 50=29.39% 00:18:32.045 lat (msec) : 100=15.54% 00:18:32.045 cpu : usr=4.09%, sys=4.29%, ctx=322, majf=0, minf=1 00:18:32.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:32.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.045 issued rwts: total=2194,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.045 job2: (groupid=0, jobs=1): err= 0: pid=1263407: Tue Jul 16 00:02:06 2024 00:18:32.045 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:18:32.045 slat (usec): min=2, max=27410, avg=196.79, stdev=1335.98 00:18:32.045 clat (usec): min=8266, max=70748, avg=25403.63, stdev=11947.35 00:18:32.045 lat (usec): min=8272, max=70765, avg=25600.43, stdev=12035.43 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[13698], 5.00th=[16712], 10.00th=[17171], 20.00th=[17695], 00:18:32.045 | 30.00th=[18220], 40.00th=[18744], 50.00th=[20055], 60.00th=[21103], 00:18:32.045 | 70.00th=[26608], 80.00th=[33817], 90.00th=[41157], 95.00th=[54264], 00:18:32.045 | 99.00th=[63701], 99.50th=[66323], 99.90th=[67634], 99.95th=[69731], 00:18:32.045 | 99.99th=[70779] 00:18:32.045 write: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1008msec); 0 zone resets 00:18:32.045 slat (usec): min=3, max=15153, avg=163.09, stdev=953.09 00:18:32.045 clat (usec): min=2288, max=60001, avg=21659.82, stdev=8495.56 00:18:32.045 lat (usec): min=2316, max=60008, avg=21822.91, stdev=8569.25 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[10945], 5.00th=[15008], 10.00th=[15401], 20.00th=[15664], 00:18:32.045 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17695], 60.00th=[20579], 00:18:32.045 | 70.00th=[23987], 80.00th=[26608], 90.00th=[31327], 95.00th=[41681], 00:18:32.045 | 99.00th=[54264], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:18:32.045 | 99.99th=[60031] 00:18:32.045 bw ( KiB/s): min= 9432, max=12768, per=17.92%, avg=11100.00, stdev=2358.91, samples=2 00:18:32.045 iops : min= 2358, max= 3192, avg=2775.00, stdev=589.73, samples=2 00:18:32.045 lat (msec) : 4=0.31%, 10=0.31%, 20=53.14%, 50=41.22%, 
100=5.02% 00:18:32.045 cpu : usr=2.38%, sys=4.77%, ctx=224, majf=0, minf=1 00:18:32.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:32.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.045 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.045 job3: (groupid=0, jobs=1): err= 0: pid=1263415: Tue Jul 16 00:02:06 2024 00:18:32.045 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:18:32.045 slat (usec): min=4, max=6069, avg=100.14, stdev=525.74 00:18:32.045 clat (usec): min=9085, max=19685, avg=13331.62, stdev=1390.65 00:18:32.045 lat (usec): min=9101, max=19724, avg=13431.76, stdev=1414.50 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[11469], 20.00th=[12256], 00:18:32.045 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:18:32.045 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15139], 95.00th=[15533], 00:18:32.045 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19006], 99.95th=[19268], 00:18:32.045 | 99.99th=[19792] 00:18:32.045 write: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.6MiB/1001msec); 0 zone resets 00:18:32.045 slat (usec): min=5, max=4272, avg=96.06, stdev=474.28 00:18:32.045 clat (usec): min=524, max=19277, avg=12996.69, stdev=1554.73 00:18:32.045 lat (usec): min=544, max=19956, avg=13092.75, stdev=1557.99 00:18:32.045 clat percentiles (usec): 00:18:32.045 | 1.00th=[ 5538], 5.00th=[10159], 10.00th=[10814], 20.00th=[12649], 00:18:32.045 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:18:32.045 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14222], 95.00th=[14484], 00:18:32.045 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19268], 99.95th=[19268], 00:18:32.045 | 99.99th=[19268] 00:18:32.045 bw ( KiB/s): min=19136, 
max=20521, per=32.01%, avg=19828.50, stdev=979.34, samples=2 00:18:32.045 iops : min= 4786, max= 5130, avg=4958.00, stdev=243.24, samples=2 00:18:32.045 lat (usec) : 750=0.02% 00:18:32.045 lat (msec) : 10=2.20%, 20=97.78% 00:18:32.045 cpu : usr=6.80%, sys=11.30%, ctx=420, majf=0, minf=1 00:18:32.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:32.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.045 issued rwts: total=4608,5028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.045 00:18:32.045 Run status group 0 (all jobs): 00:18:32.045 READ: bw=55.5MiB/s (58.1MB/s), 8741KiB/s-19.3MiB/s (8951kB/s-20.2MB/s), io=55.9MiB (58.6MB), run=1001-1008msec 00:18:32.045 WRITE: bw=60.5MiB/s (63.4MB/s), 9.96MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=61.0MiB (63.9MB), run=1001-1008msec 00:18:32.045 00:18:32.045 Disk stats (read/write): 00:18:32.045 nvme0n1: ios=3981/4096, merge=0/0, ticks=23526/23557, in_queue=47083, util=96.89% 00:18:32.045 nvme0n2: ios=2073/2103, merge=0/0, ticks=40808/58065, in_queue=98873, util=97.54% 00:18:32.045 nvme0n3: ios=2099/2297, merge=0/0, ticks=23939/23439, in_queue=47378, util=97.55% 00:18:32.045 nvme0n4: ios=3897/4096, merge=0/0, ticks=18097/17892, in_queue=35989, util=96.76% 00:18:32.045 00:02:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:32.045 00:02:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1263586 00:18:32.045 00:02:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:32.045 00:02:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:32.045 [global] 00:18:32.045 thread=1 00:18:32.045 invalidate=1 00:18:32.045 rw=read 00:18:32.045 time_based=1 00:18:32.045 runtime=10 
00:18:32.045 ioengine=libaio 00:18:32.045 direct=1 00:18:32.045 bs=4096 00:18:32.045 iodepth=1 00:18:32.045 norandommap=1 00:18:32.045 numjobs=1 00:18:32.045 00:18:32.045 [job0] 00:18:32.045 filename=/dev/nvme0n1 00:18:32.045 [job1] 00:18:32.045 filename=/dev/nvme0n2 00:18:32.045 [job2] 00:18:32.045 filename=/dev/nvme0n3 00:18:32.045 [job3] 00:18:32.045 filename=/dev/nvme0n4 00:18:32.045 Could not set queue depth (nvme0n1) 00:18:32.045 Could not set queue depth (nvme0n2) 00:18:32.045 Could not set queue depth (nvme0n3) 00:18:32.045 Could not set queue depth (nvme0n4) 00:18:32.301 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.301 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.301 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.301 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.301 fio-3.35 00:18:32.301 Starting 4 threads 00:18:35.572 00:02:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:35.572 00:02:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:35.572 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=25911296, buflen=4096 00:18:35.572 fio: pid=1263671, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:35.572 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.572 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:35.572 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read 
offset=6713344, buflen=4096 00:18:35.572 fio: pid=1263670, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:35.828 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24686592, buflen=4096 00:18:35.828 fio: pid=1263668, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:35.828 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:35.828 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:36.086 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=30892032, buflen=4096 00:18:36.086 fio: pid=1263669, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:36.344 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:36.344 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:36.344 00:18:36.344 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1263668: Tue Jul 16 00:02:10 2024 00:18:36.344 read: IOPS=1712, BW=6847KiB/s (7011kB/s)(23.5MiB/3521msec) 00:18:36.344 slat (usec): min=4, max=16657, avg=14.84, stdev=275.72 00:18:36.344 clat (usec): min=194, max=42103, avg=563.38, stdev=2939.74 00:18:36.344 lat (usec): min=199, max=49020, avg=578.22, stdev=2969.40 00:18:36.344 clat percentiles (usec): 00:18:36.344 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:18:36.344 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 310], 60.00th=[ 396], 00:18:36.344 | 70.00th=[ 429], 80.00th=[ 494], 90.00th=[ 506], 95.00th=[ 510], 00:18:36.344 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:18:36.344 | 99.99th=[42206] 00:18:36.344 bw ( KiB/s): 
min= 696, max=12288, per=31.78%, avg=7186.67, stdev=4603.19, samples=6 00:18:36.344 iops : min= 174, max= 3072, avg=1796.67, stdev=1150.80, samples=6 00:18:36.344 lat (usec) : 250=27.75%, 500=57.68%, 750=13.98% 00:18:36.344 lat (msec) : 4=0.02%, 10=0.02%, 50=0.53% 00:18:36.344 cpu : usr=0.82%, sys=1.73%, ctx=6032, majf=0, minf=1 00:18:36.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 issued rwts: total=6028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.344 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1263669: Tue Jul 16 00:02:10 2024 00:18:36.344 read: IOPS=1980, BW=7920KiB/s (8110kB/s)(29.5MiB/3809msec) 00:18:36.344 slat (usec): min=4, max=13509, avg=15.88, stdev=288.22 00:18:36.344 clat (usec): min=185, max=41985, avg=485.96, stdev=3144.51 00:18:36.344 lat (usec): min=190, max=42002, avg=501.85, stdev=3159.42 00:18:36.344 clat percentiles (usec): 00:18:36.344 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:18:36.344 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 237], 00:18:36.344 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 330], 00:18:36.344 | 99.00th=[ 469], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:36.344 | 99.99th=[42206] 00:18:36.344 bw ( KiB/s): min= 96, max=16528, per=32.35%, avg=7315.57, stdev=7801.19, samples=7 00:18:36.344 iops : min= 24, max= 4132, avg=1828.86, stdev=1950.26, samples=7 00:18:36.344 lat (usec) : 250=77.26%, 500=21.83%, 750=0.21%, 1000=0.01% 00:18:36.344 lat (msec) : 2=0.01%, 10=0.05%, 50=0.60% 00:18:36.344 cpu : usr=0.92%, sys=1.89%, ctx=7553, majf=0, minf=1 00:18:36.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:18:36.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 issued rwts: total=7543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.344 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1263670: Tue Jul 16 00:02:10 2024 00:18:36.344 read: IOPS=502, BW=2010KiB/s (2059kB/s)(6556KiB/3261msec) 00:18:36.344 slat (nsec): min=5834, max=45858, avg=12446.60, stdev=4977.36 00:18:36.344 clat (usec): min=234, max=42061, avg=1956.04, stdev=7898.99 00:18:36.344 lat (usec): min=240, max=42094, avg=1968.48, stdev=7901.31 00:18:36.344 clat percentiles (usec): 00:18:36.344 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 269], 00:18:36.344 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 367], 00:18:36.344 | 70.00th=[ 437], 80.00th=[ 482], 90.00th=[ 510], 95.00th=[ 553], 00:18:36.344 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:36.344 | 99.99th=[42206] 00:18:36.344 bw ( KiB/s): min= 104, max=10776, per=9.63%, avg=2177.33, stdev=4259.25, samples=6 00:18:36.344 iops : min= 26, max= 2694, avg=544.33, stdev=1064.81, samples=6 00:18:36.344 lat (usec) : 250=8.17%, 500=78.54%, 750=9.27% 00:18:36.344 lat (msec) : 4=0.06%, 50=3.90% 00:18:36.344 cpu : usr=0.31%, sys=0.67%, ctx=1642, majf=0, minf=1 00:18:36.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.344 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O 
error): pid=1263671: Tue Jul 16 00:02:10 2024 00:18:36.344 read: IOPS=2158, BW=8633KiB/s (8840kB/s)(24.7MiB/2931msec) 00:18:36.344 slat (nsec): min=5082, max=40600, avg=10741.60, stdev=4541.98 00:18:36.344 clat (usec): min=211, max=42342, avg=446.34, stdev=1944.09 00:18:36.344 lat (usec): min=220, max=42357, avg=457.08, stdev=1944.73 00:18:36.344 clat percentiles (usec): 00:18:36.344 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 302], 00:18:36.344 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:18:36.344 | 70.00th=[ 355], 80.00th=[ 445], 90.00th=[ 502], 95.00th=[ 506], 00:18:36.344 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[42206], 99.95th=[42206], 00:18:36.344 | 99.99th=[42206] 00:18:36.344 bw ( KiB/s): min= 3000, max=11224, per=35.62%, avg=8056.00, stdev=3633.31, samples=5 00:18:36.344 iops : min= 750, max= 2806, avg=2014.00, stdev=908.33, samples=5 00:18:36.344 lat (usec) : 250=7.54%, 500=80.99%, 750=11.21% 00:18:36.344 lat (msec) : 4=0.02%, 10=0.02%, 50=0.22% 00:18:36.344 cpu : usr=1.64%, sys=3.24%, ctx=6328, majf=0, minf=1 00:18:36.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.344 issued rwts: total=6327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.344 00:18:36.344 Run status group 0 (all jobs): 00:18:36.344 READ: bw=22.1MiB/s (23.2MB/s), 2010KiB/s-8633KiB/s (2059kB/s-8840kB/s), io=84.1MiB (88.2MB), run=2931-3809msec 00:18:36.344 00:18:36.344 Disk stats (read/write): 00:18:36.344 nvme0n1: ios=5672/0, merge=0/0, ticks=3223/0, in_queue=3223, util=95.02% 00:18:36.344 nvme0n2: ios=6781/0, merge=0/0, ticks=3443/0, in_queue=3443, util=95.58% 00:18:36.344 nvme0n3: ios=1680/0, merge=0/0, ticks=3868/0, in_queue=3868, util=98.94% 00:18:36.344 nvme0n4: 
ios=6118/0, merge=0/0, ticks=2721/0, in_queue=2721, util=96.75% 00:18:36.602 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:36.602 00:02:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:36.860 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:36.860 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:37.118 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:37.118 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:37.376 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:37.376 00:02:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1263586 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:37.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:18:37.634 00:02:12 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:37.634 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:37.892 nvmf hotplug test: fio failed as expected 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.892 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:18:38.150 rmmod nvme_tcp 00:18:38.151 rmmod nvme_fabrics 00:18:38.151 rmmod nvme_keyring 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1262074 ']' 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1262074 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1262074 ']' 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1262074 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1262074 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1262074' 00:18:38.151 killing process with pid 1262074 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1262074 00:18:38.151 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1262074 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.409 00:02:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.314 00:02:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:40.314 00:18:40.314 real 0m23.040s 00:18:40.314 user 1m19.907s 00:18:40.314 sys 0m7.261s 00:18:40.314 00:02:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:40.314 00:02:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.314 ************************************ 00:18:40.314 END TEST nvmf_fio_target 00:18:40.314 ************************************ 00:18:40.314 00:02:14 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:40.314 00:02:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:40.314 00:02:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:40.314 00:02:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.314 ************************************ 00:18:40.314 START TEST nvmf_bdevio 00:18:40.314 ************************************ 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:40.314 * Looking for test storage... 
00:18:40.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:40.314 00:02:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:42.222 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:42.222 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:42.222 Found net devices under 0000:08:00.0: cvl_0_0 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:42.222 Found net devices under 0000:08:00.1: cvl_0_1 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:18:42.222 00:18:42.222 --- 10.0.0.2 ping statistics --- 00:18:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.222 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:18:42.222 00:18:42.222 --- 10.0.0.1 ping statistics --- 00:18:42.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.222 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1265643 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:42.222 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1265643 00:18:42.223 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1265643 ']' 00:18:42.223 00:02:16 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.223 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.223 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.223 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.223 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.223 [2024-07-16 00:02:16.711210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:42.223 [2024-07-16 00:02:16.711317] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.481 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.481 [2024-07-16 00:02:16.780559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.481 [2024-07-16 00:02:16.872089] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.481 [2024-07-16 00:02:16.872157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.481 [2024-07-16 00:02:16.872175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.481 [2024-07-16 00:02:16.872188] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.481 [2024-07-16 00:02:16.872200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.481 [2024-07-16 00:02:16.872291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:42.481 [2024-07-16 00:02:16.872373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:42.481 [2024-07-16 00:02:16.872455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:42.481 [2024-07-16 00:02:16.872459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.481 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:42.481 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:18:42.481 00:02:16 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.481 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.481 00:02:16 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 [2024-07-16 00:02:17.019820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 Malloc0 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.739 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 [2024-07-16 00:02:17.070085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:18:42.740 { 00:18:42.740 "params": { 00:18:42.740 "name": "Nvme$subsystem", 00:18:42.740 "trtype": "$TEST_TRANSPORT", 00:18:42.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:42.740 "adrfam": "ipv4", 00:18:42.740 "trsvcid": "$NVMF_PORT", 00:18:42.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:42.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:42.740 "hdgst": ${hdgst:-false}, 00:18:42.740 "ddgst": ${ddgst:-false} 00:18:42.740 }, 00:18:42.740 "method": "bdev_nvme_attach_controller" 00:18:42.740 } 00:18:42.740 EOF 00:18:42.740 )") 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:42.740 00:02:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:42.740 "params": { 00:18:42.740 "name": "Nvme1", 00:18:42.740 "trtype": "tcp", 00:18:42.740 "traddr": "10.0.0.2", 00:18:42.740 "adrfam": "ipv4", 00:18:42.740 "trsvcid": "4420", 00:18:42.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.740 "hdgst": false, 00:18:42.740 "ddgst": false 00:18:42.740 }, 00:18:42.740 "method": "bdev_nvme_attach_controller" 00:18:42.740 }' 00:18:42.740 [2024-07-16 00:02:17.118321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:42.740 [2024-07-16 00:02:17.118412] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265673 ] 00:18:42.740 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.740 [2024-07-16 00:02:17.179293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.998 [2024-07-16 00:02:17.268996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.998 [2024-07-16 00:02:17.269047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.998 [2024-07-16 00:02:17.269051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.271 I/O targets: 00:18:43.271 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:43.271 00:18:43.271 00:18:43.271 CUnit - A unit testing framework for C - Version 2.1-3 00:18:43.271 http://cunit.sourceforge.net/ 00:18:43.271 00:18:43.271 00:18:43.271 Suite: bdevio tests on: Nvme1n1 00:18:43.271 Test: blockdev write read block ...passed 00:18:43.271 Test: blockdev write zeroes read block ...passed 00:18:43.271 Test: blockdev write zeroes read no split ...passed 00:18:43.271 Test: blockdev write zeroes read split ...passed 00:18:43.271 Test: blockdev write zeroes read split partial ...passed 00:18:43.271 Test: blockdev reset ...[2024-07-16 00:02:17.709444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.271 [2024-07-16 00:02:17.709569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a47760 (9): Bad file descriptor 00:18:43.531 [2024-07-16 00:02:17.812353] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:43.531 passed 00:18:43.531 Test: blockdev write read 8 blocks ...passed 00:18:43.531 Test: blockdev write read size > 128k ...passed 00:18:43.531 Test: blockdev write read invalid size ...passed 00:18:43.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:43.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:43.531 Test: blockdev write read max offset ...passed 00:18:43.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:43.531 Test: blockdev writev readv 8 blocks ...passed 00:18:43.531 Test: blockdev writev readv 30 x 1block ...passed 00:18:43.789 Test: blockdev writev readv block ...passed 00:18:43.789 Test: blockdev writev readv size > 128k ...passed 00:18:43.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:43.789 Test: blockdev comparev and writev ...[2024-07-16 00:02:18.068686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.068727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.068755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.068774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.069159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.069185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.069210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.069227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.069605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.069629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.069654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.069671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.070020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.070045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.070069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:43.789 [2024-07-16 00:02:18.070094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:43.789 passed 00:18:43.789 Test: blockdev nvme passthru rw ...passed 00:18:43.789 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:02:18.154417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.789 [2024-07-16 00:02:18.154444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.154595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.789 [2024-07-16 00:02:18.154618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.154765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.789 [2024-07-16 00:02:18.154787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:43.789 [2024-07-16 00:02:18.154943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:43.789 [2024-07-16 00:02:18.154966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:43.789 passed 00:18:43.789 Test: blockdev nvme admin passthru ...passed 00:18:43.789 Test: blockdev copy ...passed 00:18:43.789 00:18:43.789 Run Summary: Type Total Ran Passed Failed Inactive 00:18:43.789 suites 1 1 n/a 0 0 00:18:43.789 tests 23 23 23 0 0 00:18:43.789 asserts 152 152 152 0 n/a 00:18:43.789 00:18:43.789 Elapsed time = 1.239 seconds 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.047 rmmod nvme_tcp 00:18:44.047 rmmod nvme_fabrics 00:18:44.047 rmmod nvme_keyring 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1265643 ']' 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1265643 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 1265643 ']' 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1265643 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1265643 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1265643' 00:18:44.047 killing process with pid 1265643 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 
1265643 00:18:44.047 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1265643 00:18:44.305 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.306 00:02:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.211 00:02:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.211 00:18:46.211 real 0m5.933s 00:18:46.211 user 0m10.135s 00:18:46.211 sys 0m1.841s 00:18:46.211 00:02:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.211 00:02:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:46.211 ************************************ 00:18:46.211 END TEST nvmf_bdevio 00:18:46.211 ************************************ 00:18:46.211 00:02:20 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:46.211 00:02:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:46.211 00:02:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:46.211 00:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.211 ************************************ 00:18:46.211 START TEST nvmf_auth_target 00:18:46.211 ************************************ 00:18:46.211 00:02:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:46.470 * Looking for test storage... 00:18:46.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.470 
00:02:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.470 00:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:48.426 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:48.426 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.426 00:02:22 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:48.426 Found net devices under 0000:08:00.0: cvl_0_0 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:48.426 Found net devices under 0000:08:00.1: cvl_0_1 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.426 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:18:48.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:18:48.426 00:18:48.426 --- 10.0.0.2 ping statistics --- 00:18:48.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.426 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:18:48.426 00:18:48.426 --- 10.0.0.1 ping statistics --- 00:18:48.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.426 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.426 00:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1267248 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1267248 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1267248 ']' 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.427 00:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1267322 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fcc9f1aee1ba89e8399448c264f8db29d27bce6ce9209cb 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.15v 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fcc9f1aee1ba89e8399448c264f8db29d27bce6ce9209cb 0 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fcc9f1aee1ba89e8399448c264f8db29d27bce6ce9209cb 0 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fcc9f1aee1ba89e8399448c264f8db29d27bce6ce9209cb 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:48.683 00:02:22 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.683 00:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.15v 00:18:48.683 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.15v 00:18:48.683 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.15v 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=534132266edd1906a1ae3b786826181cbd4894a9c910f08ebb1ca3e55d5f96ed 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Vdr 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 534132266edd1906a1ae3b786826181cbd4894a9c910f08ebb1ca3e55d5f96ed 3 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 534132266edd1906a1ae3b786826181cbd4894a9c910f08ebb1ca3e55d5f96ed 3 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@704 -- # key=534132266edd1906a1ae3b786826181cbd4894a9c910f08ebb1ca3e55d5f96ed 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Vdr 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Vdr 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Vdr 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=66f859c49ea16494650104486d786e44 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.S5y 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 66f859c49ea16494650104486d786e44 1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 66f859c49ea16494650104486d786e44 1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.684 00:02:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=66f859c49ea16494650104486d786e44 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.S5y 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.S5y 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.S5y 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5ef77953c5687f323d80ea89ebb76003003fc3d40e6e24da 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Wjz 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5ef77953c5687f323d80ea89ebb76003003fc3d40e6e24da 2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
5ef77953c5687f323d80ea89ebb76003003fc3d40e6e24da 2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5ef77953c5687f323d80ea89ebb76003003fc3d40e6e24da 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Wjz 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Wjz 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Wjz 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75fd961effea58821395d3622f95aaff3817b38dcc2d7e71 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PFi 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key 75fd961effea58821395d3622f95aaff3817b38dcc2d7e71 2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75fd961effea58821395d3622f95aaff3817b38dcc2d7e71 2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75fd961effea58821395d3622f95aaff3817b38dcc2d7e71 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:48.684 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PFi 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PFi 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.PFi 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bd42a93be5ea77ce7c386b3c4e81ca3d 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TS9 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bd42a93be5ea77ce7c386b3c4e81ca3d 1 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bd42a93be5ea77ce7c386b3c4e81ca3d 1 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bd42a93be5ea77ce7c386b3c4e81ca3d 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TS9 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TS9 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.TS9 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d751497a8048a489a23836a48cf8f1b4070765825b919020208952b4076c7ab 00:18:48.941 00:02:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JPO 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d751497a8048a489a23836a48cf8f1b4070765825b919020208952b4076c7ab 3 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4d751497a8048a489a23836a48cf8f1b4070765825b919020208952b4076c7ab 3 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4d751497a8048a489a23836a48cf8f1b4070765825b919020208952b4076c7ab 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JPO 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JPO 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.JPO 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1267248 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1267248 ']' 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:48.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.941 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1267322 /var/tmp/host.sock 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1267322 ']' 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:49.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
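The `gen_dhchap_key` / `format_dhchap_key` trace above (hex from `xxd -p /dev/urandom`, then a small inline `python -` step, then `chmod 0600` on a `mktemp` file) can be reproduced outside the test harness. The sketch below is inferred from this log and from the DHHC-1 secrets that appear verbatim later in the trace; the little-endian CRC32 trailer and the function names are assumptions, not verbatim SPDK code.

```python
# Sketch of what nvmf/common.sh's format_dhchap_key/format_key appear to do.
# Assumption: the base64 payload is the ASCII hex key followed by a 4-byte
# little-endian CRC32 of that key (inferred, not confirmed by the log).
import base64
import os
import zlib


def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Wrap a hex key as DHHC-1:<digest>:<base64(key || crc32)>: ."""
    raw = key_hex.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")  # assumed byte order
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(raw + crc).decode())


def parse_dhchap_key(secret: str) -> str:
    """Invert format_dhchap_key, verifying the trailing checksum."""
    prefix, _digest, b64, _ = secret.split(":")
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    assert prefix == "DHHC-1"
    assert zlib.crc32(key).to_bytes(4, "little") == crc
    return key.decode()


# gen_dhchap_key sha384 48 draws 24 random bytes and hex-encodes them
# (matching the `xxd -p -c0 -l 24 /dev/urandom` pattern in the trace):
key = os.urandom(24).hex()
secret = format_dhchap_key(key, 2)   # digest index 2 == sha384 per digests[]
assert parse_dhchap_key(secret) == key
```

The digest index in the secret matches the `digests` associative array shown in the trace (`null=0`, `sha256=1`, `sha384=2`, `sha512=3`).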
00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:49.199 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.15v 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.455 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.712 00:02:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.712 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.15v 00:18:49.712 00:02:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.15v 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Vdr ]] 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vdr 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.969 00:02:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vdr 00:18:49.969 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vdr 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.S5y 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.S5y 00:18:50.226 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.S5y 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Wjz ]] 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wjz 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wjz 00:18:50.483 00:02:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wjz 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PFi 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.PFi 00:18:50.740 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.PFi 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.TS9 ]] 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TS9 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TS9 00:18:50.997 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.TS9 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JPO 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JPO 00:18:51.253 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JPO 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.511 00:02:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.768 00:02:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.768 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.026 00:18:52.026 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.026 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.026 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.283 
00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.283 { 00:18:52.283 "cntlid": 1, 00:18:52.283 "qid": 0, 00:18:52.283 "state": "enabled", 00:18:52.283 "listen_address": { 00:18:52.283 "trtype": "TCP", 00:18:52.283 "adrfam": "IPv4", 00:18:52.283 "traddr": "10.0.0.2", 00:18:52.283 "trsvcid": "4420" 00:18:52.283 }, 00:18:52.283 "peer_address": { 00:18:52.283 "trtype": "TCP", 00:18:52.283 "adrfam": "IPv4", 00:18:52.283 "traddr": "10.0.0.1", 00:18:52.283 "trsvcid": "52978" 00:18:52.283 }, 00:18:52.283 "auth": { 00:18:52.283 "state": "completed", 00:18:52.283 "digest": "sha256", 00:18:52.283 "dhgroup": "null" 00:18:52.283 } 00:18:52.283 } 00:18:52.283 ]' 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.283 00:02:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:52.540 00:02:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.913 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
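The `connect_authenticate` step above verifies the qpair JSON returned by `nvmf_subsystem_get_qpairs` with three `jq` probes (`.[0].auth.digest`, `.auth.dhgroup`, `.auth.state`). The same check in Python, using the qpair record copied from the log (the `check_auth` helper name is this sketch's own, not part of `target/auth.sh`):

```python
import json

# Qpair list as printed by nvmf_subsystem_get_qpairs in the trace above.
qpairs_json = """
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "peer_address": {"trtype": "TCP", "adrfam": "IPv4",
                     "traddr": "10.0.0.1", "trsvcid": "52978"},
    "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}
  }
]
"""


def check_auth(qpairs, digest, dhgroup):
    """Equivalent of the jq '.[0].auth.*' comparisons in the trace."""
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)


qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha256", "null"))
```

A mismatch on any of the three fields fails the check, which is why the test loops over every digest/dhgroup/keyid combination.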
00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.171 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.429 00:18:54.429 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.429 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.429 00:02:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.687 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.945 { 00:18:54.945 "cntlid": 3, 00:18:54.945 "qid": 0, 00:18:54.945 "state": "enabled", 00:18:54.945 "listen_address": { 00:18:54.945 "trtype": "TCP", 00:18:54.945 "adrfam": "IPv4", 00:18:54.945 "traddr": "10.0.0.2", 00:18:54.945 "trsvcid": "4420" 00:18:54.945 }, 00:18:54.945 "peer_address": { 00:18:54.945 "trtype": "TCP", 00:18:54.945 "adrfam": "IPv4", 00:18:54.945 "traddr": "10.0.0.1", 00:18:54.945 "trsvcid": "53000" 00:18:54.945 }, 00:18:54.945 "auth": { 00:18:54.945 "state": "completed", 00:18:54.945 "digest": "sha256", 00:18:54.945 "dhgroup": "null" 00:18:54.945 } 00:18:54.945 } 00:18:54.945 ]' 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.945 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.203 00:02:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.575 00:02:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.575 00:02:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.575 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.140 00:18:57.140 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.140 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.140 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.398 00:02:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.398 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.398 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.398 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.398 00:02:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.398 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.398 { 00:18:57.398 "cntlid": 5, 00:18:57.398 "qid": 0, 00:18:57.398 "state": "enabled", 00:18:57.398 "listen_address": { 00:18:57.398 "trtype": "TCP", 00:18:57.399 "adrfam": "IPv4", 00:18:57.399 "traddr": "10.0.0.2", 00:18:57.399 "trsvcid": "4420" 00:18:57.399 }, 00:18:57.399 "peer_address": { 00:18:57.399 "trtype": "TCP", 00:18:57.399 "adrfam": "IPv4", 00:18:57.399 "traddr": "10.0.0.1", 00:18:57.399 "trsvcid": "53028" 00:18:57.399 }, 00:18:57.399 "auth": { 00:18:57.399 "state": "completed", 00:18:57.399 "digest": "sha256", 00:18:57.399 "dhgroup": "null" 00:18:57.399 } 00:18:57.399 } 00:18:57.399 ]' 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.399 00:02:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.656 00:02:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.028 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.286 00:02:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.286 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.287 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.544 00:18:59.544 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.544 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.544 00:02:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
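The `--dhchap-secret` strings passed to `nvme connect` above can be decoded back to the hex keys generated earlier in the trace. The secret below is copied verbatim from the key2 connect line; splitting off the 4-byte checksum trailer recovers the exact key that `format_dhchap_key` was given (the interpretation of the last four bytes as a checksum is an assumption from the earlier sketch):

```python
import base64

# Secret copied verbatim from the `nvme connect ... --dhchap-secret` line above.
secret = ("DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYz"
          "ODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==:")

_prefix, digest, payload, _ = secret.split(":")
raw = base64.b64decode(payload)
key, checksum = raw[:-4], raw[-4:]   # assumed: last 4 bytes are a checksum

print(digest)        # "02" -> sha384 per the digests[] mapping in the trace
print(key.decode())  # the hex key generated by gen_dhchap_key earlier
```

This matches the `format_dhchap_key 75fd961effea58821395d3622f95aaff3817b38dcc2d7e71 2` call recorded at the start of this section.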
00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.802 { 00:18:59.802 "cntlid": 7, 00:18:59.802 "qid": 0, 00:18:59.802 "state": "enabled", 00:18:59.802 "listen_address": { 00:18:59.802 "trtype": "TCP", 00:18:59.802 "adrfam": "IPv4", 00:18:59.802 "traddr": "10.0.0.2", 00:18:59.802 "trsvcid": "4420" 00:18:59.802 }, 00:18:59.802 "peer_address": { 00:18:59.802 "trtype": "TCP", 00:18:59.802 "adrfam": "IPv4", 00:18:59.802 "traddr": "10.0.0.1", 00:18:59.802 "trsvcid": "53058" 00:18:59.802 }, 00:18:59.802 "auth": { 00:18:59.802 "state": "completed", 00:18:59.802 "digest": "sha256", 00:18:59.802 "dhgroup": "null" 00:18:59.802 } 00:18:59.802 } 00:18:59.802 ]' 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.802 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.060 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.060 00:02:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.432 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.433 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.433 00:02:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.691 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.948 00:19:01.948 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.948 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.948 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.513 { 00:19:02.513 "cntlid": 9, 00:19:02.513 "qid": 0, 00:19:02.513 "state": "enabled", 00:19:02.513 "listen_address": { 00:19:02.513 "trtype": "TCP", 00:19:02.513 "adrfam": "IPv4", 00:19:02.513 "traddr": "10.0.0.2", 00:19:02.513 "trsvcid": "4420" 00:19:02.513 }, 00:19:02.513 "peer_address": { 00:19:02.513 "trtype": "TCP", 00:19:02.513 "adrfam": "IPv4", 00:19:02.513 "traddr": "10.0.0.1", 00:19:02.513 "trsvcid": "43272" 00:19:02.513 }, 00:19:02.513 "auth": { 00:19:02.513 "state": "completed", 00:19:02.513 "digest": "sha256", 00:19:02.513 "dhgroup": "ffdhe2048" 00:19:02.513 } 00:19:02.513 } 00:19:02.513 ]' 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.513 00:02:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.771 00:02:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.142 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.706 00:19:04.706 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.706 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.706 00:02:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.004 { 00:19:05.004 "cntlid": 11, 00:19:05.004 "qid": 0, 00:19:05.004 "state": "enabled", 00:19:05.004 "listen_address": { 00:19:05.004 "trtype": "TCP", 00:19:05.004 "adrfam": "IPv4", 00:19:05.004 "traddr": "10.0.0.2", 00:19:05.004 "trsvcid": "4420" 00:19:05.004 }, 00:19:05.004 "peer_address": { 00:19:05.004 "trtype": "TCP", 00:19:05.004 "adrfam": "IPv4", 00:19:05.004 "traddr": "10.0.0.1", 00:19:05.004 "trsvcid": "43312" 00:19:05.004 }, 00:19:05.004 "auth": { 00:19:05.004 "state": "completed", 00:19:05.004 "digest": "sha256", 00:19:05.004 "dhgroup": "ffdhe2048" 00:19:05.004 } 00:19:05.004 } 00:19:05.004 ]' 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:05.004 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.260 00:02:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.630 00:02:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.887 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.888 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.145 00:19:07.145 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.145 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.145 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.402 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.402 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.402 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.402 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.659 00:02:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.659 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.659 { 00:19:07.659 "cntlid": 13, 00:19:07.660 "qid": 0, 00:19:07.660 "state": "enabled", 00:19:07.660 "listen_address": { 00:19:07.660 "trtype": "TCP", 00:19:07.660 "adrfam": "IPv4", 00:19:07.660 "traddr": "10.0.0.2", 00:19:07.660 "trsvcid": "4420" 00:19:07.660 }, 00:19:07.660 "peer_address": { 00:19:07.660 "trtype": "TCP", 00:19:07.660 "adrfam": "IPv4", 00:19:07.660 "traddr": "10.0.0.1", 00:19:07.660 "trsvcid": "43334" 00:19:07.660 }, 00:19:07.660 "auth": { 00:19:07.660 "state": "completed", 00:19:07.660 "digest": "sha256", 00:19:07.660 "dhgroup": "ffdhe2048" 00:19:07.660 } 00:19:07.660 } 00:19:07.660 ]' 00:19:07.660 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.660 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.660 00:02:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.660 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.660 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.660 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.660 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:07.660 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.916 00:02:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.284 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:09.541 00:02:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.541 00:02:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.798 00:19:09.798 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.798 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.798 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.055 { 00:19:10.055 "cntlid": 15, 00:19:10.055 "qid": 0, 00:19:10.055 "state": "enabled", 00:19:10.055 "listen_address": { 00:19:10.055 "trtype": "TCP", 00:19:10.055 "adrfam": "IPv4", 00:19:10.055 "traddr": "10.0.0.2", 00:19:10.055 "trsvcid": "4420" 00:19:10.055 }, 00:19:10.055 "peer_address": { 00:19:10.055 "trtype": "TCP", 00:19:10.055 "adrfam": "IPv4", 00:19:10.055 "traddr": "10.0.0.1", 00:19:10.055 "trsvcid": "43368" 00:19:10.055 }, 00:19:10.055 "auth": { 00:19:10.055 "state": "completed", 00:19:10.055 "digest": "sha256", 00:19:10.055 "dhgroup": "ffdhe2048" 00:19:10.055 } 00:19:10.055 } 00:19:10.055 ]' 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.055 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.312 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.312 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.312 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.312 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:19:10.312 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.569 00:02:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:11.943 00:02:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.943 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.509 00:19:12.509 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.509 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.509 00:02:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.766 { 00:19:12.766 "cntlid": 17, 00:19:12.766 "qid": 0, 00:19:12.766 "state": "enabled", 00:19:12.766 "listen_address": { 00:19:12.766 "trtype": "TCP", 00:19:12.766 "adrfam": "IPv4", 00:19:12.766 "traddr": "10.0.0.2", 00:19:12.766 "trsvcid": "4420" 00:19:12.766 }, 00:19:12.766 "peer_address": { 00:19:12.766 "trtype": "TCP", 00:19:12.766 "adrfam": "IPv4", 00:19:12.766 "traddr": "10.0.0.1", 00:19:12.766 "trsvcid": "57510" 00:19:12.766 }, 00:19:12.766 "auth": { 00:19:12.766 "state": "completed", 00:19:12.766 "digest": "sha256", 00:19:12.766 "dhgroup": "ffdhe3072" 00:19:12.766 } 00:19:12.766 } 00:19:12.766 ]' 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.766 00:02:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.766 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.023 00:02:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.395 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.652 00:02:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.652 00:02:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.910 00:19:14.910 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.910 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:19:14.910 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.168 { 00:19:15.168 "cntlid": 19, 00:19:15.168 "qid": 0, 00:19:15.168 "state": "enabled", 00:19:15.168 "listen_address": { 00:19:15.168 "trtype": "TCP", 00:19:15.168 "adrfam": "IPv4", 00:19:15.168 "traddr": "10.0.0.2", 00:19:15.168 "trsvcid": "4420" 00:19:15.168 }, 00:19:15.168 "peer_address": { 00:19:15.168 "trtype": "TCP", 00:19:15.168 "adrfam": "IPv4", 00:19:15.168 "traddr": "10.0.0.1", 00:19:15.168 "trsvcid": "57534" 00:19:15.168 }, 00:19:15.168 "auth": { 00:19:15.168 "state": "completed", 00:19:15.168 "digest": "sha256", 00:19:15.168 "dhgroup": "ffdhe3072" 00:19:15.168 } 00:19:15.168 } 00:19:15.168 ]' 00:19:15.168 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.425 00:02:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.684 00:02:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.057 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.318 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.574 00:19:17.574 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.574 00:02:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.574 00:02:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.831 { 00:19:17.831 "cntlid": 21, 00:19:17.831 "qid": 0, 00:19:17.831 "state": "enabled", 00:19:17.831 "listen_address": { 00:19:17.831 "trtype": "TCP", 00:19:17.831 "adrfam": "IPv4", 00:19:17.831 "traddr": "10.0.0.2", 00:19:17.831 "trsvcid": "4420" 00:19:17.831 }, 00:19:17.831 "peer_address": { 00:19:17.831 "trtype": "TCP", 00:19:17.831 "adrfam": "IPv4", 00:19:17.831 "traddr": "10.0.0.1", 00:19:17.831 "trsvcid": "57566" 00:19:17.831 }, 00:19:17.831 "auth": { 00:19:17.831 "state": "completed", 00:19:17.831 "digest": "sha256", 00:19:17.831 "dhgroup": "ffdhe3072" 00:19:17.831 } 00:19:17.831 } 00:19:17.831 ]' 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.831 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.089 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.089 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.089 00:02:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.089 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.089 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.346 00:02:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.719 00:02:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.719 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.284 00:19:20.284 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.284 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.284 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.542 { 00:19:20.542 "cntlid": 23, 00:19:20.542 "qid": 0, 00:19:20.542 "state": "enabled", 00:19:20.542 "listen_address": { 00:19:20.542 "trtype": "TCP", 00:19:20.542 "adrfam": "IPv4", 00:19:20.542 "traddr": "10.0.0.2", 00:19:20.542 "trsvcid": "4420" 00:19:20.542 }, 00:19:20.542 "peer_address": { 00:19:20.542 "trtype": "TCP", 00:19:20.542 "adrfam": "IPv4", 00:19:20.542 "traddr": "10.0.0.1", 00:19:20.542 "trsvcid": "57584" 00:19:20.542 }, 00:19:20.542 "auth": { 00:19:20.542 "state": "completed", 00:19:20.542 "digest": "sha256", 00:19:20.542 "dhgroup": "ffdhe3072" 00:19:20.542 } 00:19:20.542 } 00:19:20.542 ]' 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.542 00:02:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.799 00:02:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.174 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.175 00:02:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.746 00:19:22.746 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:19:22.746 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.746 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.004 { 00:19:23.004 "cntlid": 25, 00:19:23.004 "qid": 0, 00:19:23.004 "state": "enabled", 00:19:23.004 "listen_address": { 00:19:23.004 "trtype": "TCP", 00:19:23.004 "adrfam": "IPv4", 00:19:23.004 "traddr": "10.0.0.2", 00:19:23.004 "trsvcid": "4420" 00:19:23.004 }, 00:19:23.004 "peer_address": { 00:19:23.004 "trtype": "TCP", 00:19:23.004 "adrfam": "IPv4", 00:19:23.004 "traddr": "10.0.0.1", 00:19:23.004 "trsvcid": "46498" 00:19:23.004 }, 00:19:23.004 "auth": { 00:19:23.004 "state": "completed", 00:19:23.004 "digest": "sha256", 00:19:23.004 "dhgroup": "ffdhe4096" 00:19:23.004 } 00:19:23.004 } 00:19:23.004 ]' 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.004 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.262 00:02:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.634 00:02:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.892 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.150 
00:19:25.408 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.408 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.408 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.666 { 00:19:25.666 "cntlid": 27, 00:19:25.666 "qid": 0, 00:19:25.666 "state": "enabled", 00:19:25.666 "listen_address": { 00:19:25.666 "trtype": "TCP", 00:19:25.666 "adrfam": "IPv4", 00:19:25.666 "traddr": "10.0.0.2", 00:19:25.666 "trsvcid": "4420" 00:19:25.666 }, 00:19:25.666 "peer_address": { 00:19:25.666 "trtype": "TCP", 00:19:25.666 "adrfam": "IPv4", 00:19:25.666 "traddr": "10.0.0.1", 00:19:25.666 "trsvcid": "46514" 00:19:25.666 }, 00:19:25.666 "auth": { 00:19:25.666 "state": "completed", 00:19:25.666 "digest": "sha256", 00:19:25.666 "dhgroup": "ffdhe4096" 00:19:25.666 } 00:19:25.666 } 00:19:25.666 ]' 00:19:25.666 00:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.666 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.924 00:03:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.296 00:03:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.554 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.555 00:03:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:28.120 00:19:28.120 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.120 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.120 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.378 { 00:19:28.378 "cntlid": 29, 00:19:28.378 "qid": 0, 00:19:28.378 "state": "enabled", 00:19:28.378 "listen_address": { 00:19:28.378 "trtype": "TCP", 00:19:28.378 "adrfam": "IPv4", 00:19:28.378 "traddr": "10.0.0.2", 00:19:28.378 "trsvcid": "4420" 00:19:28.378 }, 00:19:28.378 "peer_address": { 00:19:28.378 "trtype": "TCP", 00:19:28.378 "adrfam": "IPv4", 00:19:28.378 "traddr": "10.0.0.1", 00:19:28.378 "trsvcid": "46532" 00:19:28.378 }, 00:19:28.378 "auth": { 00:19:28.378 "state": "completed", 00:19:28.378 "digest": "sha256", 00:19:28.378 "dhgroup": "ffdhe4096" 00:19:28.378 } 00:19:28.378 } 00:19:28.378 ]' 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.378 00:03:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.378 00:03:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.636 00:03:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:19:30.007 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.264 00:03:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.829 
00:19:30.829 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.829 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.829 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.087 { 00:19:31.087 "cntlid": 31, 00:19:31.087 "qid": 0, 00:19:31.087 "state": "enabled", 00:19:31.087 "listen_address": { 00:19:31.087 "trtype": "TCP", 00:19:31.087 "adrfam": "IPv4", 00:19:31.087 "traddr": "10.0.0.2", 00:19:31.087 "trsvcid": "4420" 00:19:31.087 }, 00:19:31.087 "peer_address": { 00:19:31.087 "trtype": "TCP", 00:19:31.087 "adrfam": "IPv4", 00:19:31.087 "traddr": "10.0.0.1", 00:19:31.087 "trsvcid": "46562" 00:19:31.087 }, 00:19:31.087 "auth": { 00:19:31.087 "state": "completed", 00:19:31.087 "digest": "sha256", 00:19:31.087 "dhgroup": "ffdhe4096" 00:19:31.087 } 00:19:31.087 } 00:19:31.087 ]' 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.087 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.345 00:03:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:19:32.716 00:03:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.973 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.544 00:19:33.544 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.544 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.544 00:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.800 { 00:19:33.800 "cntlid": 33, 00:19:33.800 "qid": 0, 00:19:33.800 "state": "enabled", 00:19:33.800 "listen_address": { 00:19:33.800 "trtype": "TCP", 00:19:33.800 "adrfam": "IPv4", 00:19:33.800 "traddr": "10.0.0.2", 00:19:33.800 "trsvcid": "4420" 00:19:33.800 }, 00:19:33.800 "peer_address": { 00:19:33.800 "trtype": "TCP", 00:19:33.800 "adrfam": "IPv4", 00:19:33.800 "traddr": "10.0.0.1", 00:19:33.800 "trsvcid": "34464" 00:19:33.800 }, 00:19:33.800 "auth": { 00:19:33.800 "state": "completed", 00:19:33.800 "digest": "sha256", 00:19:33.800 "dhgroup": "ffdhe6144" 00:19:33.800 } 00:19:33.800 } 00:19:33.800 ]' 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.800 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.056 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.056 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.056 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.312 00:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.683 00:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.683 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.248 00:19:36.248 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.248 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.248 00:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.813 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.813 { 00:19:36.813 "cntlid": 35, 00:19:36.813 "qid": 0, 00:19:36.813 "state": "enabled", 00:19:36.813 "listen_address": { 00:19:36.813 "trtype": "TCP", 00:19:36.813 "adrfam": "IPv4", 00:19:36.814 "traddr": "10.0.0.2", 00:19:36.814 "trsvcid": "4420" 00:19:36.814 }, 00:19:36.814 "peer_address": { 00:19:36.814 "trtype": "TCP", 00:19:36.814 "adrfam": "IPv4", 00:19:36.814 "traddr": "10.0.0.1", 00:19:36.814 "trsvcid": "34492" 00:19:36.814 }, 00:19:36.814 "auth": { 00:19:36.814 "state": "completed", 00:19:36.814 "digest": "sha256", 00:19:36.814 "dhgroup": "ffdhe6144" 00:19:36.814 } 00:19:36.814 } 00:19:36.814 ]' 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.814 
00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.814 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.071 00:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.442 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.699 00:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.265 00:19:39.265 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.265 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.265 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.522 { 00:19:39.522 "cntlid": 37, 00:19:39.522 "qid": 0, 00:19:39.522 "state": "enabled", 00:19:39.522 "listen_address": { 00:19:39.522 "trtype": "TCP", 00:19:39.522 "adrfam": "IPv4", 00:19:39.522 "traddr": "10.0.0.2", 00:19:39.522 "trsvcid": "4420" 00:19:39.522 }, 00:19:39.522 "peer_address": { 00:19:39.522 "trtype": "TCP", 00:19:39.522 "adrfam": "IPv4", 00:19:39.522 "traddr": "10.0.0.1", 00:19:39.522 "trsvcid": "34516" 00:19:39.522 }, 00:19:39.522 "auth": { 00:19:39.522 "state": "completed", 00:19:39.522 "digest": "sha256", 00:19:39.522 "dhgroup": "ffdhe6144" 00:19:39.522 } 00:19:39.522 } 00:19:39.522 ]' 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:39.522 00:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.522 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.522 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.793 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.793 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.793 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.064 00:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:19:40.998 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.255 
00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.255 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.513 00:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.077 00:19:42.077 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.077 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.077 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.335 { 00:19:42.335 "cntlid": 39, 00:19:42.335 "qid": 0, 00:19:42.335 "state": "enabled", 00:19:42.335 "listen_address": { 00:19:42.335 "trtype": "TCP", 00:19:42.335 "adrfam": "IPv4", 00:19:42.335 "traddr": "10.0.0.2", 00:19:42.335 "trsvcid": "4420" 00:19:42.335 }, 00:19:42.335 "peer_address": { 00:19:42.335 "trtype": "TCP", 00:19:42.335 "adrfam": "IPv4", 00:19:42.335 "traddr": "10.0.0.1", 00:19:42.335 "trsvcid": "33734" 00:19:42.335 }, 00:19:42.335 "auth": { 00:19:42.335 "state": "completed", 00:19:42.335 "digest": "sha256", 00:19:42.335 "dhgroup": "ffdhe6144" 00:19:42.335 } 00:19:42.335 } 00:19:42.335 ]' 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.335 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.335 00:03:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.593 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.593 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.593 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.593 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.593 00:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.850 00:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.222 
00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.222 00:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.594 00:19:45.594 00:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.594 00:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.594 00:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.594 { 00:19:45.594 "cntlid": 41, 00:19:45.594 "qid": 0, 00:19:45.594 "state": "enabled", 00:19:45.594 "listen_address": { 00:19:45.594 "trtype": "TCP", 00:19:45.594 "adrfam": "IPv4", 00:19:45.594 "traddr": "10.0.0.2", 00:19:45.594 "trsvcid": "4420" 00:19:45.594 }, 00:19:45.594 "peer_address": { 00:19:45.594 "trtype": "TCP", 00:19:45.594 "adrfam": "IPv4", 00:19:45.594 "traddr": "10.0.0.1", 00:19:45.594 "trsvcid": "33760" 00:19:45.594 }, 00:19:45.594 "auth": { 00:19:45.594 "state": "completed", 00:19:45.594 "digest": "sha256", 00:19:45.594 "dhgroup": "ffdhe8192" 00:19:45.594 } 00:19:45.594 } 00:19:45.594 ]' 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.594 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.852 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.852 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.852 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.852 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.852 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.109 00:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.481 00:03:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.481 00:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.852 00:19:48.852 00:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.852 00:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.852 00:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.852 { 00:19:48.852 "cntlid": 43, 00:19:48.852 "qid": 0, 00:19:48.852 "state": "enabled", 00:19:48.852 "listen_address": { 00:19:48.852 "trtype": "TCP", 00:19:48.852 "adrfam": "IPv4", 00:19:48.852 "traddr": "10.0.0.2", 00:19:48.852 "trsvcid": "4420" 00:19:48.852 }, 00:19:48.852 "peer_address": { 00:19:48.852 "trtype": "TCP", 00:19:48.852 "adrfam": "IPv4", 00:19:48.852 "traddr": "10.0.0.1", 00:19:48.852 "trsvcid": "33790" 00:19:48.852 }, 00:19:48.852 "auth": { 00:19:48.852 "state": "completed", 00:19:48.852 "digest": "sha256", 00:19:48.852 "dhgroup": "ffdhe8192" 00:19:48.852 } 00:19:48.852 } 00:19:48.852 ]' 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.852 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.109 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.109 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.109 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.109 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.109 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.367 00:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.738 00:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.738 00:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.738 00:03:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.110 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.110 { 00:19:52.110 "cntlid": 45, 00:19:52.110 "qid": 0, 00:19:52.110 "state": "enabled", 00:19:52.110 "listen_address": { 00:19:52.110 "trtype": "TCP", 00:19:52.110 "adrfam": "IPv4", 00:19:52.110 "traddr": "10.0.0.2", 00:19:52.110 "trsvcid": "4420" 00:19:52.110 }, 00:19:52.110 "peer_address": { 00:19:52.110 "trtype": "TCP", 00:19:52.110 "adrfam": "IPv4", 00:19:52.110 "traddr": "10.0.0.1", 00:19:52.110 "trsvcid": "33810" 00:19:52.110 }, 00:19:52.110 "auth": { 00:19:52.110 "state": "completed", 00:19:52.110 "digest": "sha256", 00:19:52.110 "dhgroup": "ffdhe8192" 00:19:52.110 } 00:19:52.110 } 00:19:52.110 ]' 
00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.110 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.366 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.366 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.366 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.366 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.366 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.622 00:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.995 00:03:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.995 00:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.995 00:03:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.369 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.369 { 00:19:55.369 "cntlid": 47, 00:19:55.369 "qid": 0, 00:19:55.369 "state": "enabled", 00:19:55.369 "listen_address": { 00:19:55.369 "trtype": "TCP", 00:19:55.369 "adrfam": "IPv4", 00:19:55.369 "traddr": "10.0.0.2", 00:19:55.369 "trsvcid": "4420" 00:19:55.369 }, 00:19:55.369 "peer_address": { 00:19:55.369 "trtype": "TCP", 00:19:55.369 "adrfam": "IPv4", 00:19:55.369 "traddr": "10.0.0.1", 00:19:55.369 "trsvcid": "42972" 00:19:55.369 }, 00:19:55.369 "auth": { 00:19:55.369 "state": "completed", 00:19:55.369 "digest": "sha256", 00:19:55.369 "dhgroup": "ffdhe8192" 00:19:55.369 } 00:19:55.369 } 00:19:55.369 ]' 00:19:55.369 00:03:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.369 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.627 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.627 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.627 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.627 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.627 00:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.885 00:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.257 
00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.257 00:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.822 00:19:57.822 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.822 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.822 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.085 { 00:19:58.085 "cntlid": 49, 00:19:58.085 "qid": 0, 00:19:58.085 "state": "enabled", 00:19:58.085 "listen_address": { 00:19:58.085 "trtype": "TCP", 00:19:58.085 "adrfam": "IPv4", 00:19:58.085 "traddr": "10.0.0.2", 00:19:58.085 "trsvcid": "4420" 00:19:58.085 }, 00:19:58.085 "peer_address": { 00:19:58.085 "trtype": "TCP", 00:19:58.085 "adrfam": "IPv4", 00:19:58.085 "traddr": "10.0.0.1", 00:19:58.085 "trsvcid": "42992" 00:19:58.085 }, 00:19:58.085 "auth": 
{ 00:19:58.085 "state": "completed", 00:19:58.085 "digest": "sha384", 00:19:58.085 "dhgroup": "null" 00:19:58.085 } 00:19:58.085 } 00:19:58.085 ]' 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.085 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.384 00:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.774 00:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.774 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.032 00:20:00.032 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.032 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.032 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.597 { 00:20:00.597 "cntlid": 51, 00:20:00.597 "qid": 0, 00:20:00.597 "state": "enabled", 00:20:00.597 "listen_address": { 00:20:00.597 "trtype": "TCP", 00:20:00.597 "adrfam": "IPv4", 00:20:00.597 "traddr": "10.0.0.2", 00:20:00.597 "trsvcid": "4420" 00:20:00.597 }, 00:20:00.597 "peer_address": { 00:20:00.597 "trtype": "TCP", 00:20:00.597 "adrfam": "IPv4", 00:20:00.597 "traddr": "10.0.0.1", 00:20:00.597 "trsvcid": "43022" 00:20:00.597 }, 
00:20:00.597 "auth": { 00:20:00.597 "state": "completed", 00:20:00.597 "digest": "sha384", 00:20:00.597 "dhgroup": "null" 00:20:00.597 } 00:20:00.597 } 00:20:00.597 ]' 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.597 00:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.855 00:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.228 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.229 00:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.795 00:20:02.795 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.795 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.795 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.052 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.052 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.053 { 00:20:03.053 "cntlid": 53, 00:20:03.053 "qid": 0, 00:20:03.053 "state": "enabled", 00:20:03.053 "listen_address": { 00:20:03.053 "trtype": "TCP", 00:20:03.053 "adrfam": "IPv4", 00:20:03.053 "traddr": "10.0.0.2", 00:20:03.053 "trsvcid": "4420" 00:20:03.053 }, 00:20:03.053 "peer_address": { 00:20:03.053 "trtype": "TCP", 00:20:03.053 "adrfam": "IPv4", 00:20:03.053 "traddr": "10.0.0.1", 00:20:03.053 "trsvcid": "53240" 00:20:03.053 }, 
00:20:03.053 "auth": { 00:20:03.053 "state": "completed", 00:20:03.053 "digest": "sha384", 00:20:03.053 "dhgroup": "null" 00:20:03.053 } 00:20:03.053 } 00:20:03.053 ]' 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.053 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.311 00:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.685 00:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.943 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.200 00:20:05.200 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.200 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.200 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.458 { 00:20:05.458 "cntlid": 55, 00:20:05.458 "qid": 0, 00:20:05.458 "state": "enabled", 00:20:05.458 "listen_address": { 00:20:05.458 "trtype": "TCP", 00:20:05.458 "adrfam": "IPv4", 00:20:05.458 "traddr": "10.0.0.2", 00:20:05.458 "trsvcid": "4420" 00:20:05.458 }, 00:20:05.458 "peer_address": { 00:20:05.458 "trtype": "TCP", 00:20:05.458 "adrfam": "IPv4", 00:20:05.458 "traddr": "10.0.0.1", 00:20:05.458 "trsvcid": "53272" 00:20:05.458 }, 00:20:05.458 "auth": { 00:20:05.458 "state": "completed", 00:20:05.458 
"digest": "sha384", 00:20:05.458 "dhgroup": "null" 00:20:05.458 } 00:20:05.458 } 00:20:05.458 ]' 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.458 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.715 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:05.715 00:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.715 00:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.715 00:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.715 00:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.970 00:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.341 
00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.341 00:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.907 00:20:07.907 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.907 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.907 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.165 { 00:20:08.165 "cntlid": 57, 00:20:08.165 "qid": 0, 00:20:08.165 "state": "enabled", 00:20:08.165 "listen_address": { 00:20:08.165 "trtype": "TCP", 00:20:08.165 "adrfam": "IPv4", 00:20:08.165 "traddr": "10.0.0.2", 00:20:08.165 "trsvcid": "4420" 00:20:08.165 }, 00:20:08.165 "peer_address": { 00:20:08.165 "trtype": "TCP", 00:20:08.165 "adrfam": "IPv4", 00:20:08.165 "traddr": "10.0.0.1", 00:20:08.165 "trsvcid": "53290" 00:20:08.165 }, 00:20:08.165 "auth": 
{ 00:20:08.165 "state": "completed", 00:20:08.165 "digest": "sha384", 00:20:08.165 "dhgroup": "ffdhe2048" 00:20:08.165 } 00:20:08.165 } 00:20:08.165 ]' 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.165 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.423 00:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:09.796 00:03:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:09.796 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.053 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.309 00:20:10.309 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.309 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.309 00:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.871 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.872 { 00:20:10.872 "cntlid": 59, 00:20:10.872 "qid": 0, 00:20:10.872 "state": "enabled", 00:20:10.872 "listen_address": { 00:20:10.872 "trtype": "TCP", 00:20:10.872 "adrfam": "IPv4", 00:20:10.872 "traddr": "10.0.0.2", 00:20:10.872 "trsvcid": "4420" 00:20:10.872 }, 00:20:10.872 "peer_address": { 00:20:10.872 "trtype": "TCP", 00:20:10.872 "adrfam": "IPv4", 00:20:10.872 "traddr": 
"10.0.0.1", 00:20:10.872 "trsvcid": "53322" 00:20:10.872 }, 00:20:10.872 "auth": { 00:20:10.872 "state": "completed", 00:20:10.872 "digest": "sha384", 00:20:10.872 "dhgroup": "ffdhe2048" 00:20:10.872 } 00:20:10.872 } 00:20:10.872 ]' 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.872 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.129 00:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.499 00:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.499 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.063 00:20:13.063 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.063 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.063 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.320 { 00:20:13.320 "cntlid": 61, 00:20:13.320 "qid": 0, 00:20:13.320 "state": "enabled", 00:20:13.320 "listen_address": { 00:20:13.320 "trtype": "TCP", 00:20:13.320 "adrfam": "IPv4", 00:20:13.320 "traddr": "10.0.0.2", 00:20:13.320 "trsvcid": "4420" 00:20:13.320 }, 00:20:13.320 "peer_address": { 
00:20:13.320 "trtype": "TCP", 00:20:13.320 "adrfam": "IPv4", 00:20:13.320 "traddr": "10.0.0.1", 00:20:13.320 "trsvcid": "55478" 00:20:13.320 }, 00:20:13.320 "auth": { 00:20:13.320 "state": "completed", 00:20:13.320 "digest": "sha384", 00:20:13.320 "dhgroup": "ffdhe2048" 00:20:13.320 } 00:20:13.320 } 00:20:13.320 ]' 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.320 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.578 00:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:14.952 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.209 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.467 00:20:15.467 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.467 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.467 00:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.752 { 00:20:15.752 "cntlid": 63, 00:20:15.752 "qid": 0, 00:20:15.752 "state": "enabled", 00:20:15.752 "listen_address": { 00:20:15.752 "trtype": "TCP", 00:20:15.752 "adrfam": "IPv4", 00:20:15.752 "traddr": "10.0.0.2", 00:20:15.752 "trsvcid": "4420" 00:20:15.752 }, 00:20:15.752 "peer_address": { 00:20:15.752 "trtype": "TCP", 00:20:15.752 "adrfam": 
"IPv4", 00:20:15.752 "traddr": "10.0.0.1", 00:20:15.752 "trsvcid": "55514" 00:20:15.752 }, 00:20:15.752 "auth": { 00:20:15.752 "state": "completed", 00:20:15.752 "digest": "sha384", 00:20:15.752 "dhgroup": "ffdhe2048" 00:20:15.752 } 00:20:15.752 } 00:20:15.752 ]' 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:15.752 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.010 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.010 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.010 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.269 00:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:17.639 00:03:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.639 00:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.639 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.203 00:20:18.203 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.203 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.203 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.461 { 00:20:18.461 "cntlid": 65, 00:20:18.461 "qid": 0, 00:20:18.461 "state": "enabled", 00:20:18.461 "listen_address": { 00:20:18.461 "trtype": "TCP", 00:20:18.461 "adrfam": "IPv4", 00:20:18.461 "traddr": "10.0.0.2", 00:20:18.461 "trsvcid": "4420" 00:20:18.461 }, 00:20:18.461 
"peer_address": { 00:20:18.461 "trtype": "TCP", 00:20:18.461 "adrfam": "IPv4", 00:20:18.461 "traddr": "10.0.0.1", 00:20:18.461 "trsvcid": "55554" 00:20:18.461 }, 00:20:18.461 "auth": { 00:20:18.461 "state": "completed", 00:20:18.461 "digest": "sha384", 00:20:18.461 "dhgroup": "ffdhe3072" 00:20:18.461 } 00:20:18.461 } 00:20:18.461 ]' 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.461 00:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.719 00:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:20:20.089 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.090 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.347 
00:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.347 00:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.604 00:20:20.604 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.604 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.604 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.862 { 00:20:20.862 "cntlid": 67, 00:20:20.862 "qid": 0, 00:20:20.862 "state": "enabled", 00:20:20.862 "listen_address": { 00:20:20.862 "trtype": "TCP", 00:20:20.862 "adrfam": "IPv4", 00:20:20.862 "traddr": "10.0.0.2", 
00:20:20.862 "trsvcid": "4420" 00:20:20.862 }, 00:20:20.862 "peer_address": { 00:20:20.862 "trtype": "TCP", 00:20:20.862 "adrfam": "IPv4", 00:20:20.862 "traddr": "10.0.0.1", 00:20:20.862 "trsvcid": "55586" 00:20:20.862 }, 00:20:20.862 "auth": { 00:20:20.862 "state": "completed", 00:20:20.862 "digest": "sha384", 00:20:20.862 "dhgroup": "ffdhe3072" 00:20:20.862 } 00:20:20.862 } 00:20:20.862 ]' 00:20:20.862 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.118 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.376 00:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.747 00:03:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.747 00:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.747 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.311 00:20:23.311 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.311 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.311 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.567 { 00:20:23.567 "cntlid": 69, 00:20:23.567 "qid": 0, 00:20:23.567 "state": "enabled", 00:20:23.567 "listen_address": { 00:20:23.567 "trtype": "TCP", 
00:20:23.567 "adrfam": "IPv4", 00:20:23.567 "traddr": "10.0.0.2", 00:20:23.567 "trsvcid": "4420" 00:20:23.567 }, 00:20:23.567 "peer_address": { 00:20:23.567 "trtype": "TCP", 00:20:23.567 "adrfam": "IPv4", 00:20:23.567 "traddr": "10.0.0.1", 00:20:23.567 "trsvcid": "52534" 00:20:23.567 }, 00:20:23.567 "auth": { 00:20:23.567 "state": "completed", 00:20:23.567 "digest": "sha384", 00:20:23.567 "dhgroup": "ffdhe3072" 00:20:23.567 } 00:20:23.567 } 00:20:23.567 ]' 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.567 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.568 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.568 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.568 00:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.568 00:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.568 00:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.568 00:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.823 00:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.193 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.193 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.451 00:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.708 00:20:25.708 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.708 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.708 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.964 { 00:20:25.964 "cntlid": 71, 00:20:25.964 "qid": 0, 00:20:25.964 "state": "enabled", 00:20:25.964 "listen_address": { 00:20:25.964 "trtype": "TCP", 00:20:25.964 "adrfam": "IPv4", 00:20:25.964 "traddr": 
"10.0.0.2", 00:20:25.964 "trsvcid": "4420" 00:20:25.964 }, 00:20:25.964 "peer_address": { 00:20:25.964 "trtype": "TCP", 00:20:25.964 "adrfam": "IPv4", 00:20:25.964 "traddr": "10.0.0.1", 00:20:25.964 "trsvcid": "52562" 00:20:25.964 }, 00:20:25.964 "auth": { 00:20:25.964 "state": "completed", 00:20:25.964 "digest": "sha384", 00:20:25.964 "dhgroup": "ffdhe3072" 00:20:25.964 } 00:20:25.964 } 00:20:25.964 ]' 00:20:25.964 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.221 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.221 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.221 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.221 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.222 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.222 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.222 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.478 00:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.846 00:04:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.846 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.411 00:20:28.411 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.411 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.411 00:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.669 { 00:20:28.669 "cntlid": 73, 00:20:28.669 "qid": 0, 00:20:28.669 "state": "enabled", 00:20:28.669 "listen_address": { 00:20:28.669 
"trtype": "TCP", 00:20:28.669 "adrfam": "IPv4", 00:20:28.669 "traddr": "10.0.0.2", 00:20:28.669 "trsvcid": "4420" 00:20:28.669 }, 00:20:28.669 "peer_address": { 00:20:28.669 "trtype": "TCP", 00:20:28.669 "adrfam": "IPv4", 00:20:28.669 "traddr": "10.0.0.1", 00:20:28.669 "trsvcid": "52604" 00:20:28.669 }, 00:20:28.669 "auth": { 00:20:28.669 "state": "completed", 00:20:28.669 "digest": "sha384", 00:20:28.669 "dhgroup": "ffdhe4096" 00:20:28.669 } 00:20:28.669 } 00:20:28.669 ]' 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.669 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.925 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.925 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.925 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.925 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.925 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.182 00:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:30.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.555 00:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.555 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.121 00:20:31.121 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.121 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.121 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.379 { 00:20:31.379 "cntlid": 75, 00:20:31.379 "qid": 0, 
00:20:31.379 "state": "enabled", 00:20:31.379 "listen_address": { 00:20:31.379 "trtype": "TCP", 00:20:31.379 "adrfam": "IPv4", 00:20:31.379 "traddr": "10.0.0.2", 00:20:31.379 "trsvcid": "4420" 00:20:31.379 }, 00:20:31.379 "peer_address": { 00:20:31.379 "trtype": "TCP", 00:20:31.379 "adrfam": "IPv4", 00:20:31.379 "traddr": "10.0.0.1", 00:20:31.379 "trsvcid": "52628" 00:20:31.379 }, 00:20:31.379 "auth": { 00:20:31.379 "state": "completed", 00:20:31.379 "digest": "sha384", 00:20:31.379 "dhgroup": "ffdhe4096" 00:20:31.379 } 00:20:31.379 } 00:20:31.379 ]' 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.379 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.637 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.637 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.637 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.637 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.637 00:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.895 00:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.304 
00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.304 00:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.870 00:20:33.870 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.870 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.870 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.128 { 00:20:34.128 
"cntlid": 77, 00:20:34.128 "qid": 0, 00:20:34.128 "state": "enabled", 00:20:34.128 "listen_address": { 00:20:34.128 "trtype": "TCP", 00:20:34.128 "adrfam": "IPv4", 00:20:34.128 "traddr": "10.0.0.2", 00:20:34.128 "trsvcid": "4420" 00:20:34.128 }, 00:20:34.128 "peer_address": { 00:20:34.128 "trtype": "TCP", 00:20:34.128 "adrfam": "IPv4", 00:20:34.128 "traddr": "10.0.0.1", 00:20:34.128 "trsvcid": "38962" 00:20:34.128 }, 00:20:34.128 "auth": { 00:20:34.128 "state": "completed", 00:20:34.128 "digest": "sha384", 00:20:34.128 "dhgroup": "ffdhe4096" 00:20:34.128 } 00:20:34.128 } 00:20:34.128 ]' 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.128 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.694 00:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:35.625 00:04:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.625 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 
00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.881 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.443 00:20:36.443 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.443 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.443 00:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.700 { 00:20:36.700 "cntlid": 79, 00:20:36.700 "qid": 0, 
00:20:36.700 "state": "enabled", 00:20:36.700 "listen_address": { 00:20:36.700 "trtype": "TCP", 00:20:36.700 "adrfam": "IPv4", 00:20:36.700 "traddr": "10.0.0.2", 00:20:36.700 "trsvcid": "4420" 00:20:36.700 }, 00:20:36.700 "peer_address": { 00:20:36.700 "trtype": "TCP", 00:20:36.700 "adrfam": "IPv4", 00:20:36.700 "traddr": "10.0.0.1", 00:20:36.700 "trsvcid": "38984" 00:20:36.700 }, 00:20:36.700 "auth": { 00:20:36.700 "state": "completed", 00:20:36.700 "digest": "sha384", 00:20:36.700 "dhgroup": "ffdhe4096" 00:20:36.700 } 00:20:36.700 } 00:20:36.700 ]' 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.700 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.957 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.957 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.957 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.957 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.957 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.215 00:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:38.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.585 00:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.585 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.149 00:20:39.405 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.405 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.405 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:39.663 { 00:20:39.663 "cntlid": 81, 00:20:39.663 "qid": 0, 00:20:39.663 "state": "enabled", 00:20:39.663 "listen_address": { 00:20:39.663 "trtype": "TCP", 00:20:39.663 "adrfam": "IPv4", 00:20:39.663 "traddr": "10.0.0.2", 00:20:39.663 "trsvcid": "4420" 00:20:39.663 }, 00:20:39.663 "peer_address": { 00:20:39.663 "trtype": "TCP", 00:20:39.663 "adrfam": "IPv4", 00:20:39.663 "traddr": "10.0.0.1", 00:20:39.663 "trsvcid": "39004" 00:20:39.663 }, 00:20:39.663 "auth": { 00:20:39.663 "state": "completed", 00:20:39.663 "digest": "sha384", 00:20:39.663 "dhgroup": "ffdhe6144" 00:20:39.663 } 00:20:39.663 } 00:20:39.663 ]' 00:20:39.663 00:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.663 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.920 00:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.293 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.550 00:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.116 00:20:42.116 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.116 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.116 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.374 00:04:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.374 { 00:20:42.374 "cntlid": 83, 00:20:42.374 "qid": 0, 00:20:42.374 "state": "enabled", 00:20:42.374 "listen_address": { 00:20:42.374 "trtype": "TCP", 00:20:42.374 "adrfam": "IPv4", 00:20:42.374 "traddr": "10.0.0.2", 00:20:42.374 "trsvcid": "4420" 00:20:42.374 }, 00:20:42.374 "peer_address": { 00:20:42.374 "trtype": "TCP", 00:20:42.374 "adrfam": "IPv4", 00:20:42.374 "traddr": "10.0.0.1", 00:20:42.374 "trsvcid": "37416" 00:20:42.374 }, 00:20:42.374 "auth": { 00:20:42.374 "state": "completed", 00:20:42.374 "digest": "sha384", 00:20:42.374 "dhgroup": "ffdhe6144" 00:20:42.374 } 00:20:42.374 } 00:20:42.374 ]' 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.374 00:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.632 00:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.006 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.264 00:04:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.264 00:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.830 00:20:44.830 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.830 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.830 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.088 { 00:20:45.088 "cntlid": 85, 00:20:45.088 "qid": 0, 00:20:45.088 "state": "enabled", 00:20:45.088 "listen_address": { 00:20:45.088 "trtype": "TCP", 00:20:45.088 "adrfam": "IPv4", 00:20:45.088 "traddr": "10.0.0.2", 00:20:45.088 "trsvcid": "4420" 00:20:45.088 }, 00:20:45.088 "peer_address": { 00:20:45.088 "trtype": "TCP", 00:20:45.088 "adrfam": "IPv4", 00:20:45.088 "traddr": "10.0.0.1", 00:20:45.088 "trsvcid": "37444" 00:20:45.088 }, 00:20:45.088 "auth": { 00:20:45.088 "state": "completed", 00:20:45.088 "digest": "sha384", 00:20:45.088 "dhgroup": "ffdhe6144" 00:20:45.088 } 00:20:45.088 } 00:20:45.088 ]' 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.088 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.346 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.346 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.346 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.346 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.346 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.604 00:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:46.975 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.976 00:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.593 00:20:47.593 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.593 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.593 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.851 { 00:20:47.851 "cntlid": 87, 00:20:47.851 "qid": 0, 00:20:47.851 "state": "enabled", 00:20:47.851 "listen_address": { 00:20:47.851 "trtype": "TCP", 00:20:47.851 "adrfam": "IPv4", 00:20:47.851 "traddr": "10.0.0.2", 00:20:47.851 "trsvcid": "4420" 00:20:47.851 }, 00:20:47.851 "peer_address": { 00:20:47.851 "trtype": "TCP", 00:20:47.851 "adrfam": "IPv4", 00:20:47.851 "traddr": "10.0.0.1", 00:20:47.851 "trsvcid": "37484" 00:20:47.851 }, 00:20:47.851 "auth": { 00:20:47.851 "state": "completed", 00:20:47.851 "digest": "sha384", 00:20:47.851 "dhgroup": "ffdhe6144" 00:20:47.851 } 00:20:47.851 } 00:20:47.851 ]' 00:20:47.851 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.108 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.108 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.108 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.108 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.109 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.109 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.109 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.366 00:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:49.747 00:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.005 00:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.975 00:20:50.975 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.975 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.975 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.233 { 00:20:51.233 "cntlid": 89, 00:20:51.233 "qid": 0, 00:20:51.233 "state": "enabled", 00:20:51.233 "listen_address": { 00:20:51.233 "trtype": "TCP", 00:20:51.233 "adrfam": "IPv4", 00:20:51.233 "traddr": "10.0.0.2", 00:20:51.233 "trsvcid": "4420" 00:20:51.233 }, 00:20:51.233 "peer_address": { 00:20:51.233 "trtype": "TCP", 00:20:51.233 "adrfam": "IPv4", 00:20:51.233 "traddr": "10.0.0.1", 00:20:51.233 "trsvcid": "37516" 00:20:51.233 }, 00:20:51.233 "auth": { 00:20:51.233 "state": "completed", 00:20:51.233 "digest": "sha384", 00:20:51.233 "dhgroup": "ffdhe8192" 00:20:51.233 } 00:20:51.233 } 00:20:51.233 ]' 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.233 00:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.799 00:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.733 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.991 00:04:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.991 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.248 00:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.248 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.248 00:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.181 00:20:54.181 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.181 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.181 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.439 { 00:20:54.439 "cntlid": 91, 00:20:54.439 "qid": 0, 00:20:54.439 "state": "enabled", 00:20:54.439 "listen_address": { 00:20:54.439 "trtype": "TCP", 00:20:54.439 "adrfam": "IPv4", 00:20:54.439 "traddr": "10.0.0.2", 00:20:54.439 "trsvcid": "4420" 00:20:54.439 }, 00:20:54.439 "peer_address": { 00:20:54.439 "trtype": "TCP", 00:20:54.439 "adrfam": "IPv4", 00:20:54.439 "traddr": "10.0.0.1", 00:20:54.439 "trsvcid": "46674" 00:20:54.439 }, 00:20:54.439 "auth": { 00:20:54.439 "state": "completed", 00:20:54.439 "digest": "sha384", 00:20:54.439 "dhgroup": "ffdhe8192" 00:20:54.439 } 00:20:54.439 } 00:20:54.439 ]' 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.439 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.697 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.697 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.697 00:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.955 
00:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.328 00:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.699 00:20:57.699 00:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.699 00:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.699 00:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.699 { 00:20:57.699 "cntlid": 93, 00:20:57.699 "qid": 0, 00:20:57.699 "state": "enabled", 00:20:57.699 "listen_address": { 00:20:57.699 "trtype": "TCP", 00:20:57.699 "adrfam": "IPv4", 00:20:57.699 "traddr": "10.0.0.2", 00:20:57.699 "trsvcid": "4420" 00:20:57.699 }, 00:20:57.699 "peer_address": { 00:20:57.699 "trtype": "TCP", 00:20:57.699 "adrfam": "IPv4", 00:20:57.699 "traddr": "10.0.0.1", 00:20:57.699 "trsvcid": "46710" 00:20:57.699 }, 00:20:57.699 "auth": { 00:20:57.699 "state": "completed", 00:20:57.699 "digest": "sha384", 00:20:57.699 "dhgroup": "ffdhe8192" 00:20:57.699 } 00:20:57.699 } 00:20:57.699 ]' 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.699 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.956 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.956 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.956 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:58.214 00:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.596 00:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.596 00:04:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:59.596 00:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.968 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.968 { 00:21:00.968 "cntlid": 95, 00:21:00.968 "qid": 0, 00:21:00.968 "state": "enabled", 00:21:00.968 "listen_address": { 00:21:00.968 "trtype": "TCP", 00:21:00.968 "adrfam": "IPv4", 00:21:00.968 "traddr": "10.0.0.2", 00:21:00.968 "trsvcid": "4420" 00:21:00.968 }, 00:21:00.968 "peer_address": { 00:21:00.968 "trtype": "TCP", 00:21:00.968 "adrfam": "IPv4", 00:21:00.968 "traddr": "10.0.0.1", 00:21:00.968 "trsvcid": "46730" 00:21:00.968 }, 00:21:00.968 "auth": { 00:21:00.968 "state": "completed", 00:21:00.968 "digest": "sha384", 00:21:00.968 "dhgroup": "ffdhe8192" 00:21:00.968 } 00:21:00.968 } 00:21:00.968 ]' 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.968 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.225 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.225 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.225 00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.483 
00:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:02.853 00:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.853 00:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:02.853 00:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.853 00:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.853 00:04:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.853 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.430 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.430 00:04:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.430 { 00:21:03.430 "cntlid": 97, 00:21:03.430 "qid": 0, 00:21:03.430 "state": "enabled", 00:21:03.430 "listen_address": { 00:21:03.430 "trtype": "TCP", 00:21:03.430 "adrfam": "IPv4", 00:21:03.430 "traddr": "10.0.0.2", 00:21:03.430 "trsvcid": "4420" 00:21:03.430 }, 00:21:03.430 "peer_address": { 00:21:03.430 "trtype": "TCP", 00:21:03.430 "adrfam": "IPv4", 00:21:03.430 "traddr": "10.0.0.1", 00:21:03.430 "trsvcid": "55658" 00:21:03.430 }, 00:21:03.430 "auth": { 00:21:03.430 "state": "completed", 00:21:03.430 "digest": "sha512", 00:21:03.430 "dhgroup": "null" 00:21:03.430 } 00:21:03.430 } 00:21:03.430 ]' 00:21:03.430 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.686 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.686 00:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.686 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:03.686 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.686 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.686 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.686 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.944 00:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.315 00:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.316 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.316 00:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.880 00:21:05.880 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.880 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.880 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.137 { 00:21:06.137 "cntlid": 99, 00:21:06.137 "qid": 0, 00:21:06.137 "state": "enabled", 00:21:06.137 "listen_address": { 00:21:06.137 "trtype": "TCP", 00:21:06.137 "adrfam": "IPv4", 00:21:06.137 "traddr": "10.0.0.2", 00:21:06.137 "trsvcid": "4420" 00:21:06.137 }, 00:21:06.137 "peer_address": { 00:21:06.137 "trtype": "TCP", 00:21:06.137 "adrfam": "IPv4", 00:21:06.137 "traddr": "10.0.0.1", 00:21:06.137 "trsvcid": "55678" 00:21:06.137 }, 00:21:06.137 "auth": { 00:21:06.137 "state": "completed", 00:21:06.137 "digest": "sha512", 00:21:06.137 "dhgroup": "null" 00:21:06.137 } 00:21:06.137 } 00:21:06.137 ]' 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.137 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.137 
00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.395 00:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:21:07.767 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.767 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:07.767 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.767 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.768 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.768 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.768 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:07.768 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.025 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.321 00:21:08.321 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.321 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.321 00:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.600 { 00:21:08.600 "cntlid": 101, 00:21:08.600 "qid": 0, 00:21:08.600 "state": "enabled", 00:21:08.600 "listen_address": { 00:21:08.600 "trtype": "TCP", 00:21:08.600 "adrfam": "IPv4", 00:21:08.600 "traddr": "10.0.0.2", 00:21:08.600 "trsvcid": "4420" 00:21:08.600 }, 00:21:08.600 "peer_address": { 00:21:08.600 "trtype": "TCP", 00:21:08.600 "adrfam": "IPv4", 00:21:08.600 "traddr": "10.0.0.1", 00:21:08.600 "trsvcid": "55700" 00:21:08.600 }, 00:21:08.600 "auth": { 00:21:08.600 "state": "completed", 00:21:08.600 "digest": "sha512", 00:21:08.600 "dhgroup": "null" 00:21:08.600 } 00:21:08.600 } 00:21:08.600 ]' 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.600 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.858 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:08.858 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.858 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.858 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.858 
00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.114 00:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:10.482 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.483 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.483 00:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.483 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.483 00:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.047 00:21:11.047 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.047 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.047 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.047 00:04:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.305 { 00:21:11.305 "cntlid": 103, 00:21:11.305 "qid": 0, 00:21:11.305 "state": "enabled", 00:21:11.305 "listen_address": { 00:21:11.305 "trtype": "TCP", 00:21:11.305 "adrfam": "IPv4", 00:21:11.305 "traddr": "10.0.0.2", 00:21:11.305 "trsvcid": "4420" 00:21:11.305 }, 00:21:11.305 "peer_address": { 00:21:11.305 "trtype": "TCP", 00:21:11.305 "adrfam": "IPv4", 00:21:11.305 "traddr": "10.0.0.1", 00:21:11.305 "trsvcid": "55714" 00:21:11.305 }, 00:21:11.305 "auth": { 00:21:11.305 "state": "completed", 00:21:11.305 "digest": "sha512", 00:21:11.305 "dhgroup": "null" 00:21:11.305 } 00:21:11.305 } 00:21:11.305 ]' 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.305 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.562 00:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.933 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.498 00:21:13.498 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.498 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.498 00:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.757 { 00:21:13.757 "cntlid": 105, 00:21:13.757 "qid": 0, 00:21:13.757 "state": "enabled", 00:21:13.757 "listen_address": { 00:21:13.757 "trtype": "TCP", 00:21:13.757 "adrfam": "IPv4", 00:21:13.757 "traddr": "10.0.0.2", 00:21:13.757 "trsvcid": "4420" 00:21:13.757 }, 00:21:13.757 "peer_address": { 00:21:13.757 "trtype": "TCP", 00:21:13.757 "adrfam": "IPv4", 00:21:13.757 "traddr": "10.0.0.1", 00:21:13.757 "trsvcid": "33716" 00:21:13.757 }, 00:21:13.757 "auth": { 00:21:13.757 "state": "completed", 00:21:13.757 "digest": "sha512", 00:21:13.757 "dhgroup": "ffdhe2048" 00:21:13.757 } 00:21:13.757 } 00:21:13.757 ]' 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:13.757 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.015 00:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.385 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:15.643 00:04:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.643 00:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.901 00:21:15.901 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.901 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.901 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.158 { 00:21:16.158 "cntlid": 107, 00:21:16.158 "qid": 0, 00:21:16.158 "state": "enabled", 00:21:16.158 "listen_address": { 00:21:16.158 "trtype": "TCP", 00:21:16.158 "adrfam": "IPv4", 00:21:16.158 "traddr": "10.0.0.2", 00:21:16.158 "trsvcid": "4420" 00:21:16.158 }, 00:21:16.158 "peer_address": { 00:21:16.158 "trtype": "TCP", 00:21:16.158 "adrfam": "IPv4", 00:21:16.158 "traddr": "10.0.0.1", 00:21:16.158 "trsvcid": "33756" 00:21:16.158 }, 00:21:16.158 "auth": { 00:21:16.158 "state": "completed", 00:21:16.158 "digest": "sha512", 00:21:16.158 "dhgroup": "ffdhe2048" 00:21:16.158 } 00:21:16.158 } 00:21:16.158 ]' 00:21:16.158 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.415 00:04:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.415 00:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.672 00:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.055 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.313 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.570 00:21:18.571 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.571 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.571 00:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.828 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.828 { 00:21:18.828 "cntlid": 109, 00:21:18.828 "qid": 0, 00:21:18.828 "state": "enabled", 00:21:18.828 "listen_address": { 00:21:18.828 "trtype": "TCP", 00:21:18.828 "adrfam": "IPv4", 00:21:18.828 "traddr": "10.0.0.2", 00:21:18.828 "trsvcid": "4420" 00:21:18.828 }, 00:21:18.828 "peer_address": { 00:21:18.828 "trtype": "TCP", 00:21:18.828 "adrfam": "IPv4", 00:21:18.828 "traddr": "10.0.0.1", 00:21:18.828 "trsvcid": "33778" 00:21:18.828 }, 00:21:18.829 "auth": { 00:21:18.829 "state": "completed", 00:21:18.829 "digest": "sha512", 00:21:18.829 "dhgroup": "ffdhe2048" 00:21:18.829 } 00:21:18.829 } 00:21:18.829 ]' 00:21:18.829 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.829 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.829 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.829 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.829 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.086 00:04:53 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.086 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.086 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.344 00:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.277 00:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.842 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.100 00:21:21.100 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.100 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.100 00:04:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.358 { 00:21:21.358 "cntlid": 111, 00:21:21.358 "qid": 0, 00:21:21.358 "state": "enabled", 00:21:21.358 "listen_address": { 00:21:21.358 "trtype": "TCP", 00:21:21.358 "adrfam": "IPv4", 00:21:21.358 "traddr": "10.0.0.2", 00:21:21.358 "trsvcid": "4420" 00:21:21.358 }, 00:21:21.358 "peer_address": { 00:21:21.358 "trtype": "TCP", 00:21:21.358 "adrfam": "IPv4", 00:21:21.358 "traddr": "10.0.0.1", 00:21:21.358 "trsvcid": "33818" 00:21:21.358 }, 00:21:21.358 "auth": { 00:21:21.358 "state": "completed", 00:21:21.358 "digest": "sha512", 00:21:21.358 "dhgroup": "ffdhe2048" 00:21:21.358 } 00:21:21.358 } 00:21:21.358 ]' 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.358 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.615 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:21:21.615 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.615 00:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.872 00:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe3072 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.242 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.499 00:21:23.499 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.499 00:04:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.499 00:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.755 { 00:21:23.755 "cntlid": 113, 00:21:23.755 "qid": 0, 00:21:23.755 "state": "enabled", 00:21:23.755 "listen_address": { 00:21:23.755 "trtype": "TCP", 00:21:23.755 "adrfam": "IPv4", 00:21:23.755 "traddr": "10.0.0.2", 00:21:23.755 "trsvcid": "4420" 00:21:23.755 }, 00:21:23.755 "peer_address": { 00:21:23.755 "trtype": "TCP", 00:21:23.755 "adrfam": "IPv4", 00:21:23.755 "traddr": "10.0.0.1", 00:21:23.755 "trsvcid": "48794" 00:21:23.755 }, 00:21:23.755 "auth": { 00:21:23.755 "state": "completed", 00:21:23.755 "digest": "sha512", 00:21:23.755 "dhgroup": "ffdhe3072" 00:21:23.755 } 00:21:23.755 } 00:21:23.755 ]' 00:21:23.755 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.012 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.269 00:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.672 00:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.672 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.237 
00:21:26.237 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.237 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.237 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.494 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.494 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.494 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.495 { 00:21:26.495 "cntlid": 115, 00:21:26.495 "qid": 0, 00:21:26.495 "state": "enabled", 00:21:26.495 "listen_address": { 00:21:26.495 "trtype": "TCP", 00:21:26.495 "adrfam": "IPv4", 00:21:26.495 "traddr": "10.0.0.2", 00:21:26.495 "trsvcid": "4420" 00:21:26.495 }, 00:21:26.495 "peer_address": { 00:21:26.495 "trtype": "TCP", 00:21:26.495 "adrfam": "IPv4", 00:21:26.495 "traddr": "10.0.0.1", 00:21:26.495 "trsvcid": "48808" 00:21:26.495 }, 00:21:26.495 "auth": { 00:21:26.495 "state": "completed", 00:21:26.495 "digest": "sha512", 00:21:26.495 "dhgroup": "ffdhe3072" 00:21:26.495 } 00:21:26.495 } 00:21:26.495 ]' 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.495 00:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.752 00:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.125 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:28.690 00:21:28.690 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.690 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.690 00:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.690 { 00:21:28.690 "cntlid": 117, 00:21:28.690 "qid": 0, 00:21:28.690 "state": "enabled", 00:21:28.690 "listen_address": { 00:21:28.690 "trtype": "TCP", 00:21:28.690 "adrfam": "IPv4", 00:21:28.690 "traddr": "10.0.0.2", 00:21:28.690 "trsvcid": "4420" 00:21:28.690 }, 00:21:28.690 "peer_address": { 00:21:28.690 "trtype": "TCP", 00:21:28.690 "adrfam": "IPv4", 00:21:28.690 "traddr": "10.0.0.1", 00:21:28.690 "trsvcid": "48840" 00:21:28.690 }, 00:21:28.690 "auth": { 00:21:28.690 "state": "completed", 00:21:28.690 "digest": "sha512", 00:21:28.690 "dhgroup": "ffdhe3072" 00:21:28.690 } 00:21:28.690 } 00:21:28.690 ]' 00:21:28.690 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.960 00:05:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.960 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.217 00:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:21:30.588 00:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.588 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.154 
00:21:31.154 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.154 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.154 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.412 { 00:21:31.412 "cntlid": 119, 00:21:31.412 "qid": 0, 00:21:31.412 "state": "enabled", 00:21:31.412 "listen_address": { 00:21:31.412 "trtype": "TCP", 00:21:31.412 "adrfam": "IPv4", 00:21:31.412 "traddr": "10.0.0.2", 00:21:31.412 "trsvcid": "4420" 00:21:31.412 }, 00:21:31.412 "peer_address": { 00:21:31.412 "trtype": "TCP", 00:21:31.412 "adrfam": "IPv4", 00:21:31.412 "traddr": "10.0.0.1", 00:21:31.412 "trsvcid": "48868" 00:21:31.412 }, 00:21:31.412 "auth": { 00:21:31.412 "state": "completed", 00:21:31.412 "digest": "sha512", 00:21:31.412 "dhgroup": "ffdhe3072" 00:21:31.412 } 00:21:31.412 } 00:21:31.412 ]' 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.412 00:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.669 00:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:33.039 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:21:33.040 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.297 00:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.555 00:21:33.555 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.555 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.555 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.812 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.812 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.812 00:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.812 00:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.070 { 00:21:34.070 "cntlid": 121, 00:21:34.070 "qid": 0, 00:21:34.070 "state": "enabled", 00:21:34.070 "listen_address": { 00:21:34.070 "trtype": "TCP", 00:21:34.070 "adrfam": "IPv4", 00:21:34.070 "traddr": "10.0.0.2", 00:21:34.070 "trsvcid": "4420" 00:21:34.070 }, 00:21:34.070 "peer_address": { 00:21:34.070 "trtype": "TCP", 00:21:34.070 "adrfam": "IPv4", 00:21:34.070 "traddr": "10.0.0.1", 00:21:34.070 "trsvcid": "36462" 00:21:34.070 }, 00:21:34.070 "auth": { 00:21:34.070 "state": "completed", 00:21:34.070 "digest": "sha512", 00:21:34.070 "dhgroup": "ffdhe4096" 00:21:34.070 } 00:21:34.070 } 00:21:34.070 ]' 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.070 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.328 00:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.698 00:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.955 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.212 00:21:36.212 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.212 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.212 00:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.776 { 00:21:36.776 "cntlid": 123, 00:21:36.776 "qid": 0, 00:21:36.776 "state": "enabled", 00:21:36.776 "listen_address": { 00:21:36.776 "trtype": "TCP", 00:21:36.776 "adrfam": "IPv4", 00:21:36.776 "traddr": "10.0.0.2", 00:21:36.776 "trsvcid": "4420" 00:21:36.776 }, 00:21:36.776 "peer_address": { 00:21:36.776 "trtype": "TCP", 00:21:36.776 "adrfam": "IPv4", 00:21:36.776 "traddr": "10.0.0.1", 00:21:36.776 "trsvcid": "36486" 00:21:36.776 }, 00:21:36.776 "auth": { 00:21:36.776 "state": "completed", 00:21:36.776 "digest": "sha512", 00:21:36.776 "dhgroup": "ffdhe4096" 00:21:36.776 } 00:21:36.776 } 00:21:36.776 ]' 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.776 
00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.776 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.034 00:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.408 00:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.973 00:21:38.973 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.973 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.973 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.231 { 00:21:39.231 "cntlid": 125, 00:21:39.231 "qid": 0, 00:21:39.231 "state": "enabled", 00:21:39.231 "listen_address": { 00:21:39.231 "trtype": "TCP", 00:21:39.231 "adrfam": "IPv4", 00:21:39.231 "traddr": "10.0.0.2", 00:21:39.231 "trsvcid": "4420" 00:21:39.231 }, 00:21:39.231 "peer_address": { 00:21:39.231 "trtype": "TCP", 00:21:39.231 "adrfam": "IPv4", 00:21:39.231 "traddr": "10.0.0.1", 00:21:39.231 "trsvcid": "36516" 00:21:39.231 }, 00:21:39.231 "auth": { 00:21:39.231 "state": "completed", 00:21:39.231 "digest": "sha512", 00:21:39.231 "dhgroup": "ffdhe4096" 00:21:39.231 } 00:21:39.231 } 00:21:39.231 ]' 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.231 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.490 00:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.863 
00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.863 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.124 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.382 00:21:41.640 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.640 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.640 00:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.898 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.898 { 00:21:41.898 "cntlid": 127, 00:21:41.898 "qid": 0, 00:21:41.898 "state": "enabled", 00:21:41.898 "listen_address": { 00:21:41.898 "trtype": "TCP", 00:21:41.898 "adrfam": "IPv4", 00:21:41.898 "traddr": "10.0.0.2", 00:21:41.898 "trsvcid": "4420" 00:21:41.898 }, 00:21:41.899 "peer_address": { 00:21:41.899 "trtype": "TCP", 00:21:41.899 "adrfam": "IPv4", 00:21:41.899 "traddr": "10.0.0.1", 00:21:41.899 "trsvcid": "43072" 00:21:41.899 }, 00:21:41.899 "auth": { 00:21:41.899 "state": "completed", 00:21:41.899 "digest": "sha512", 00:21:41.899 "dhgroup": "ffdhe4096" 00:21:41.899 } 00:21:41.899 } 00:21:41.899 ]' 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.899 00:05:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.899 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.157 00:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.553 
00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.553 00:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.811 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.384 00:21:44.384 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.384 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.384 00:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.648 { 00:21:44.648 "cntlid": 129, 00:21:44.648 "qid": 0, 00:21:44.648 "state": "enabled", 00:21:44.648 "listen_address": { 00:21:44.648 "trtype": "TCP", 00:21:44.648 "adrfam": "IPv4", 00:21:44.648 "traddr": "10.0.0.2", 00:21:44.648 "trsvcid": "4420" 00:21:44.648 }, 00:21:44.648 "peer_address": { 00:21:44.648 "trtype": "TCP", 00:21:44.648 "adrfam": "IPv4", 00:21:44.648 "traddr": "10.0.0.1", 00:21:44.648 "trsvcid": "43092" 00:21:44.648 }, 00:21:44.648 "auth": { 00:21:44.648 "state": "completed", 00:21:44.648 "digest": "sha512", 00:21:44.648 "dhgroup": "ffdhe6144" 00:21:44.648 } 00:21:44.648 } 00:21:44.648 ]' 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.648 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.906 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.906 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.906 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.906 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.906 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.164 00:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.538 00:05:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.538 00:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.538 00:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.538 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.538 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.473 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.473 { 00:21:47.473 "cntlid": 131, 00:21:47.473 "qid": 0, 00:21:47.473 "state": "enabled", 00:21:47.473 "listen_address": { 00:21:47.473 "trtype": "TCP", 00:21:47.473 "adrfam": "IPv4", 00:21:47.473 "traddr": "10.0.0.2", 00:21:47.473 "trsvcid": "4420" 00:21:47.473 }, 00:21:47.473 "peer_address": { 00:21:47.473 "trtype": "TCP", 00:21:47.473 "adrfam": "IPv4", 00:21:47.473 "traddr": "10.0.0.1", 00:21:47.473 "trsvcid": "43108" 00:21:47.473 }, 00:21:47.473 "auth": { 00:21:47.473 "state": "completed", 00:21:47.473 "digest": "sha512", 00:21:47.473 "dhgroup": "ffdhe6144" 00:21:47.473 } 00:21:47.473 } 00:21:47.473 ]' 00:21:47.473 00:05:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.730 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.730 00:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.730 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.730 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.730 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.730 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.730 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.988 00:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.363 00:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.363 00:05:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.296 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.296 { 00:21:50.296 "cntlid": 133, 00:21:50.296 "qid": 0, 00:21:50.296 "state": "enabled", 00:21:50.296 "listen_address": { 00:21:50.296 "trtype": "TCP", 00:21:50.296 "adrfam": "IPv4", 00:21:50.296 "traddr": "10.0.0.2", 00:21:50.296 "trsvcid": "4420" 00:21:50.296 }, 00:21:50.296 "peer_address": { 00:21:50.296 "trtype": "TCP", 00:21:50.296 "adrfam": "IPv4", 00:21:50.296 "traddr": "10.0.0.1", 00:21:50.296 "trsvcid": "43120" 00:21:50.296 }, 00:21:50.296 "auth": { 00:21:50.296 "state": "completed", 00:21:50.296 "digest": "sha512", 00:21:50.296 "dhgroup": "ffdhe6144" 00:21:50.296 } 00:21:50.296 } 00:21:50.296 ]' 
00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.296 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.553 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.553 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.553 00:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.812 00:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.182 00:05:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.182 00:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.182 00:05:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.746 00:21:52.746 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.746 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.746 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:53.003 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.003 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.003 00:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.003 00:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:53.261 { 00:21:53.261 "cntlid": 135, 00:21:53.261 "qid": 0, 00:21:53.261 "state": "enabled", 00:21:53.261 "listen_address": { 00:21:53.261 "trtype": "TCP", 00:21:53.261 "adrfam": "IPv4", 00:21:53.261 "traddr": "10.0.0.2", 00:21:53.261 "trsvcid": "4420" 00:21:53.261 }, 00:21:53.261 "peer_address": { 00:21:53.261 "trtype": "TCP", 00:21:53.261 "adrfam": "IPv4", 00:21:53.261 "traddr": "10.0.0.1", 00:21:53.261 "trsvcid": "51578" 00:21:53.261 }, 00:21:53.261 "auth": { 00:21:53.261 "state": "completed", 00:21:53.261 "digest": "sha512", 00:21:53.261 "dhgroup": "ffdhe6144" 00:21:53.261 } 00:21:53.261 } 00:21:53.261 ]' 00:21:53.261 00:05:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.261 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.518 00:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.891 
00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.891 00:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.265 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.265 { 00:21:56.265 "cntlid": 137, 00:21:56.265 "qid": 0, 00:21:56.265 "state": "enabled", 00:21:56.265 "listen_address": { 00:21:56.265 "trtype": "TCP", 00:21:56.265 "adrfam": "IPv4", 00:21:56.265 "traddr": "10.0.0.2", 00:21:56.265 "trsvcid": "4420" 00:21:56.265 }, 00:21:56.265 "peer_address": { 00:21:56.265 "trtype": "TCP", 00:21:56.265 "adrfam": "IPv4", 00:21:56.265 "traddr": "10.0.0.1", 00:21:56.265 "trsvcid": "51606" 00:21:56.265 }, 00:21:56.265 "auth": { 00:21:56.265 "state": "completed", 00:21:56.265 "digest": "sha512", 00:21:56.265 "dhgroup": 
"ffdhe8192" 00:21:56.265 } 00:21:56.265 } 00:21:56.265 ]' 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.265 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.523 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.523 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.523 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.523 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.523 00:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.781 00:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.156 00:05:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.156 00:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.530 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.530 00:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.530 00:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.530 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.530 { 00:21:59.530 "cntlid": 139, 00:21:59.530 "qid": 0, 00:21:59.530 "state": "enabled", 00:21:59.530 "listen_address": { 00:21:59.530 "trtype": "TCP", 00:21:59.530 "adrfam": "IPv4", 00:21:59.530 "traddr": "10.0.0.2", 00:21:59.530 "trsvcid": "4420" 00:21:59.530 }, 00:21:59.530 "peer_address": { 00:21:59.530 "trtype": "TCP", 00:21:59.530 "adrfam": "IPv4", 00:21:59.530 "traddr": "10.0.0.1", 00:21:59.530 "trsvcid": "51634" 00:21:59.530 }, 00:21:59.530 
"auth": { 00:21:59.530 "state": "completed", 00:21:59.530 "digest": "sha512", 00:21:59.530 "dhgroup": "ffdhe8192" 00:21:59.530 } 00:21:59.530 } 00:21:59.530 ]' 00:21:59.530 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.787 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.045 00:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:NjZmODU5YzQ5ZWExNjQ5NDY1MDEwNDQ4NmQ3ODZlNDSPMxMi: --dhchap-ctrl-secret DHHC-1:02:NWVmNzc5NTNjNTY4N2YzMjNkODBlYTg5ZWJiNzYwMDMwMDNmYzNkNDBlNmUyNGRhH0oUog==: 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.426 00:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.414 00:22:02.414 00:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.414 00:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.414 00:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.980 { 00:22:02.980 "cntlid": 141, 00:22:02.980 "qid": 0, 00:22:02.980 "state": "enabled", 00:22:02.980 "listen_address": { 00:22:02.980 "trtype": "TCP", 00:22:02.980 "adrfam": "IPv4", 00:22:02.980 "traddr": "10.0.0.2", 00:22:02.980 "trsvcid": "4420" 00:22:02.980 }, 00:22:02.980 "peer_address": { 00:22:02.980 "trtype": "TCP", 00:22:02.980 "adrfam": "IPv4", 00:22:02.980 "traddr": "10.0.0.1", 00:22:02.980 "trsvcid": 
"51426" 00:22:02.980 }, 00:22:02.980 "auth": { 00:22:02.980 "state": "completed", 00:22:02.980 "digest": "sha512", 00:22:02.980 "dhgroup": "ffdhe8192" 00:22:02.980 } 00:22:02.980 } 00:22:02.980 ]' 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.980 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.239 00:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:NzVmZDk2MWVmZmVhNTg4MjEzOTVkMzYyMmY5NWFhZmYzODE3YjM4ZGNjMmQ3ZTcxH/p4ZA==: --dhchap-ctrl-secret DHHC-1:01:YmQ0MmE5M2JlNWVhNzdjZTdjMzg2YjNjNGU4MWNhM2RYztDy: 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:04.613 00:05:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.613 00:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.871 00:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.806 00:22:05.806 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.806 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.806 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.063 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.063 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.063 00:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.063 00:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.064 00:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.064 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.064 { 00:22:06.064 "cntlid": 143, 00:22:06.064 "qid": 0, 00:22:06.064 "state": "enabled", 00:22:06.064 "listen_address": { 00:22:06.064 "trtype": "TCP", 00:22:06.064 "adrfam": "IPv4", 00:22:06.064 "traddr": "10.0.0.2", 00:22:06.064 "trsvcid": "4420" 00:22:06.064 }, 00:22:06.064 "peer_address": { 00:22:06.064 "trtype": "TCP", 00:22:06.064 "adrfam": "IPv4", 00:22:06.064 "traddr": "10.0.0.1", 00:22:06.064 "trsvcid": "51456" 00:22:06.064 }, 00:22:06.064 "auth": { 
00:22:06.064 "state": "completed", 00:22:06.064 "digest": "sha512", 00:22:06.064 "dhgroup": "ffdhe8192" 00:22:06.064 } 00:22:06.064 } 00:22:06.064 ]' 00:22:06.064 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.064 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.064 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.322 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.322 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.322 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.322 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.322 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.580 00:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.965 00:05:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.965 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.966 00:05:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.966 00:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.966 00:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.966 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.966 00:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.337 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.337 00:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.338 { 00:22:09.338 "cntlid": 145, 00:22:09.338 "qid": 0, 
00:22:09.338 "state": "enabled", 00:22:09.338 "listen_address": { 00:22:09.338 "trtype": "TCP", 00:22:09.338 "adrfam": "IPv4", 00:22:09.338 "traddr": "10.0.0.2", 00:22:09.338 "trsvcid": "4420" 00:22:09.338 }, 00:22:09.338 "peer_address": { 00:22:09.338 "trtype": "TCP", 00:22:09.338 "adrfam": "IPv4", 00:22:09.338 "traddr": "10.0.0.1", 00:22:09.338 "trsvcid": "51484" 00:22:09.338 }, 00:22:09.338 "auth": { 00:22:09.338 "state": "completed", 00:22:09.338 "digest": "sha512", 00:22:09.338 "dhgroup": "ffdhe8192" 00:22:09.338 } 00:22:09.338 } 00:22:09.338 ]' 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.338 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.595 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.595 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.595 00:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.852 00:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:N2ZjYzlmMWFlZTFiYTg5ZTgzOTk0NDhjMjY0ZjhkYjI5ZDI3YmNlNmNlOTIwOWNi0tDqBQ==: --dhchap-ctrl-secret DHHC-1:03:NTM0MTMyMjY2ZWRkMTkwNmExYWUzYjc4NjgyNjE4MWNiZDQ4OTRhOWM5MTBmMDhlYmIxY2EzZTU1ZDVmOTZlZOesax4=: 00:22:11.221 00:05:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:11.221 00:05:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:11.221 00:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.153 request: 00:22:12.153 { 00:22:12.153 "name": "nvme0", 00:22:12.153 "trtype": "tcp", 00:22:12.153 "traddr": "10.0.0.2", 00:22:12.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:12.153 "adrfam": "ipv4", 00:22:12.153 "trsvcid": "4420", 00:22:12.153 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.153 "dhchap_key": "key2", 00:22:12.153 "method": "bdev_nvme_attach_controller", 00:22:12.153 "req_id": 1 00:22:12.153 } 00:22:12.153 Got JSON-RPC error response 00:22:12.153 response: 00:22:12.153 { 00:22:12.153 "code": -5, 00:22:12.153 "message": "Input/output error" 00:22:12.153 } 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.153 00:05:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.153 00:05:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.153 00:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:13.083 request: 00:22:13.083 { 00:22:13.083 "name": "nvme0", 00:22:13.083 "trtype": "tcp", 00:22:13.083 "traddr": "10.0.0.2", 00:22:13.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:13.083 "adrfam": "ipv4", 00:22:13.083 "trsvcid": "4420", 00:22:13.083 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.083 "dhchap_key": "key1", 00:22:13.083 "dhchap_ctrlr_key": "ckey2", 00:22:13.083 "method": "bdev_nvme_attach_controller", 00:22:13.083 "req_id": 1 00:22:13.083 } 00:22:13.083 Got JSON-RPC error response 00:22:13.083 response: 00:22:13.083 { 00:22:13.083 "code": -5, 00:22:13.083 "message": "Input/output error" 00:22:13.083 } 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.083 00:05:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.083 00:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.057 request: 00:22:14.057 { 00:22:14.057 "name": "nvme0", 00:22:14.057 "trtype": "tcp", 00:22:14.057 "traddr": "10.0.0.2", 00:22:14.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:14.057 "adrfam": "ipv4", 00:22:14.057 "trsvcid": "4420", 00:22:14.057 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.057 "dhchap_key": "key1", 00:22:14.057 "dhchap_ctrlr_key": "ckey1", 00:22:14.057 "method": "bdev_nvme_attach_controller", 00:22:14.057 "req_id": 1 00:22:14.057 } 00:22:14.057 Got JSON-RPC error response 00:22:14.057 response: 00:22:14.057 { 00:22:14.057 "code": -5, 00:22:14.057 "message": "Input/output error" 00:22:14.057 } 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1267248 ']' 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1267248' 00:22:14.057 killing process with pid 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1267248 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:14.057 00:05:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1286816 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1286816 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1286816 ']' 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.057 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@142 -- # waitforlisten 1286816 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 1286816 ']' 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:14.620 00:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:14.876 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.877 00:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.809 00:22:15.809 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.809 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.809 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.067 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.067 { 00:22:16.067 "cntlid": 1, 00:22:16.067 "qid": 0, 00:22:16.067 "state": "enabled", 00:22:16.067 "listen_address": { 00:22:16.067 "trtype": "TCP", 00:22:16.067 "adrfam": "IPv4", 00:22:16.067 "traddr": "10.0.0.2", 00:22:16.067 "trsvcid": "4420" 00:22:16.067 }, 00:22:16.067 "peer_address": { 00:22:16.067 "trtype": "TCP", 00:22:16.067 "adrfam": "IPv4", 00:22:16.067 "traddr": "10.0.0.1", 00:22:16.067 "trsvcid": "56010" 00:22:16.067 }, 00:22:16.068 "auth": { 00:22:16.068 "state": "completed", 00:22:16.068 "digest": "sha512", 00:22:16.068 "dhgroup": "ffdhe8192" 00:22:16.068 } 00:22:16.068 } 00:22:16.068 ]' 00:22:16.068 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.325 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.583 00:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NGQ3NTE0OTdhODA0OGE0ODlhMjM4MzZhNDhjZjhmMWI0MDcwNzY1ODI1YjkxOTAyMDIwODk1MmI0MDc2YzdhYvWvRmM=: 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.977 
00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.977 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.235 request: 00:22:18.235 { 00:22:18.235 "name": "nvme0", 00:22:18.235 "trtype": "tcp", 00:22:18.235 "traddr": "10.0.0.2", 00:22:18.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:18.235 "adrfam": "ipv4", 00:22:18.235 "trsvcid": "4420", 00:22:18.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.235 "dhchap_key": "key3", 00:22:18.235 "method": "bdev_nvme_attach_controller", 00:22:18.235 "req_id": 1 00:22:18.235 } 00:22:18.235 Got JSON-RPC error response 00:22:18.235 response: 
00:22:18.235 { 00:22:18.235 "code": -5, 00:22:18.235 "message": "Input/output error" 00:22:18.235 } 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:18.235 00:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.802 request: 00:22:18.802 { 00:22:18.802 "name": "nvme0", 00:22:18.802 "trtype": "tcp", 00:22:18.802 "traddr": "10.0.0.2", 00:22:18.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:18.802 "adrfam": "ipv4", 00:22:18.802 "trsvcid": "4420", 00:22:18.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.802 "dhchap_key": "key3", 00:22:18.802 "method": "bdev_nvme_attach_controller", 00:22:18.802 "req_id": 1 00:22:18.802 } 00:22:18.802 Got JSON-RPC error response 00:22:18.802 response: 00:22:18.802 { 00:22:18.802 "code": -5, 00:22:18.802 "message": "Input/output error" 00:22:18.802 } 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:18.802 00:05:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.802 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.059 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:19.317 request: 00:22:19.317 { 00:22:19.317 "name": "nvme0", 00:22:19.317 "trtype": "tcp", 00:22:19.317 "traddr": "10.0.0.2", 00:22:19.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:22:19.317 "adrfam": "ipv4", 00:22:19.317 "trsvcid": "4420", 00:22:19.317 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:22:19.317 "dhchap_key": "key0", 00:22:19.317 "dhchap_ctrlr_key": "key1", 00:22:19.317 "method": "bdev_nvme_attach_controller", 00:22:19.317 "req_id": 1 00:22:19.317 } 00:22:19.317 Got JSON-RPC error response 00:22:19.317 response: 00:22:19.317 { 00:22:19.317 "code": -5, 00:22:19.317 "message": "Input/output error" 00:22:19.317 } 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:19.317 00:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:19.633 00:22:19.633 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:19.633 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:19.633 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.891 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.891 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:22:19.891 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1267322 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1267322 ']' 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1267322 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1267322 00:22:20.149 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:20.150 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:20.150 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1267322' 00:22:20.150 killing process with pid 1267322 00:22:20.150 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1267322 00:22:20.150 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1267322 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.408 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.408 rmmod nvme_tcp 00:22:20.408 rmmod nvme_fabrics 00:22:20.666 rmmod nvme_keyring 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1286816 ']' 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1286816 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 1286816 ']' 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 1286816 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1286816 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:20.666 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:20.667 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1286816' 00:22:20.667 killing process with pid 1286816 00:22:20.667 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 1286816 00:22:20.667 00:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 1286816 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.667 
00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.667 00:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.199 00:05:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.200 00:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.15v /tmp/spdk.key-sha256.S5y /tmp/spdk.key-sha384.PFi /tmp/spdk.key-sha512.JPO /tmp/spdk.key-sha512.Vdr /tmp/spdk.key-sha384.Wjz /tmp/spdk.key-sha256.TS9 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:23.200 00:22:23.200 real 3m36.495s 00:22:23.200 user 8m23.055s 00:22:23.200 sys 0m25.686s 00:22:23.200 00:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:23.200 00:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.200 ************************************ 00:22:23.200 END TEST nvmf_auth_target 00:22:23.200 ************************************ 00:22:23.200 00:05:57 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:23.200 00:05:57 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.200 00:05:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 
1 ']' 00:22:23.200 00:05:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:23.200 00:05:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.200 ************************************ 00:22:23.200 START TEST nvmf_bdevio_no_huge 00:22:23.200 ************************************ 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.200 * Looking for test storage... 00:22:23.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.200 00:05:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.200 00:05:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:24.579 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:24.579 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:24.579 Found net devices under 0000:08:00.0: cvl_0_0 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:24.579 Found net devices under 0000:08:00.1: cvl_0_1 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.579 00:05:58 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.579 00:05:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.579 
00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:24.579 00:22:24.579 --- 10.0.0.2 ping statistics --- 00:22:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.579 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:22:24.579 00:22:24.579 --- 10.0.0.1 ping statistics --- 00:22:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.579 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:24.579 00:05:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:24.579 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1288902 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1288902 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1288902 ']' 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:24.580 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.838 [2024-07-16 00:05:59.134597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:24.838 [2024-07-16 00:05:59.134707] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:24.838 [2024-07-16 00:05:59.202341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.838 [2024-07-16 00:05:59.292033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.838 [2024-07-16 00:05:59.292092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.838 [2024-07-16 00:05:59.292108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.838 [2024-07-16 00:05:59.292121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.838 [2024-07-16 00:05:59.292133] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.838 [2024-07-16 00:05:59.292228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.838 [2024-07-16 00:05:59.292281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:24.838 [2024-07-16 00:05:59.292333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:24.838 [2024-07-16 00:05:59.292335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 [2024-07-16 00:05:59.417294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 Malloc0 00:22:25.097 00:05:59 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.097 [2024-07-16 00:05:59.455745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:25.097 00:05:59 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.097 { 00:22:25.097 "params": { 00:22:25.097 "name": "Nvme$subsystem", 00:22:25.097 "trtype": "$TEST_TRANSPORT", 00:22:25.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.097 "adrfam": "ipv4", 00:22:25.097 "trsvcid": "$NVMF_PORT", 00:22:25.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.097 "hdgst": ${hdgst:-false}, 00:22:25.097 "ddgst": ${ddgst:-false} 00:22:25.097 }, 00:22:25.097 "method": "bdev_nvme_attach_controller" 00:22:25.097 } 00:22:25.097 EOF 00:22:25.097 )") 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:25.097 00:05:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:25.097 "params": { 00:22:25.097 "name": "Nvme1", 00:22:25.097 "trtype": "tcp", 00:22:25.097 "traddr": "10.0.0.2", 00:22:25.097 "adrfam": "ipv4", 00:22:25.097 "trsvcid": "4420", 00:22:25.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.097 "hdgst": false, 00:22:25.097 "ddgst": false 00:22:25.097 }, 00:22:25.097 "method": "bdev_nvme_attach_controller" 00:22:25.097 }' 00:22:25.097 [2024-07-16 00:05:59.503857] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:25.097 [2024-07-16 00:05:59.503953] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1288932 ] 00:22:25.097 [2024-07-16 00:05:59.563115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:25.355 [2024-07-16 00:05:59.652843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.355 [2024-07-16 00:05:59.652895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.355 [2024-07-16 00:05:59.652898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.355 I/O targets: 00:22:25.356 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:25.356 00:22:25.356 00:22:25.356 CUnit - A unit testing framework for C - Version 2.1-3 00:22:25.356 http://cunit.sourceforge.net/ 00:22:25.356 00:22:25.356 00:22:25.356 Suite: bdevio tests on: Nvme1n1 00:22:25.614 Test: blockdev write read block ...passed 00:22:25.614 Test: blockdev write zeroes read block ...passed 00:22:25.614 Test: blockdev write zeroes read no split ...passed 00:22:25.614 Test: blockdev write zeroes read split ...passed 00:22:25.614 Test: blockdev write zeroes read split partial ...passed 00:22:25.614 Test: blockdev reset ...[2024-07-16 00:06:00.013061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.614 [2024-07-16 00:06:00.013206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e3ca0 (9): Bad file descriptor 00:22:25.614 [2024-07-16 00:06:00.069800] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:25.614 passed 00:22:25.614 Test: blockdev write read 8 blocks ...passed 00:22:25.614 Test: blockdev write read size > 128k ...passed 00:22:25.614 Test: blockdev write read invalid size ...passed 00:22:25.614 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:25.614 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:25.614 Test: blockdev write read max offset ...passed 00:22:25.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:25.872 Test: blockdev writev readv 8 blocks ...passed 00:22:25.872 Test: blockdev writev readv 30 x 1block ...passed 00:22:25.872 Test: blockdev writev readv block ...passed 00:22:25.872 Test: blockdev writev readv size > 128k ...passed 00:22:25.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:25.872 Test: blockdev comparev and writev ...[2024-07-16 00:06:00.284661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.284700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.284727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.284746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.872 [2024-07-16 00:06:00.285950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.872 [2024-07-16 00:06:00.285975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.872 passed 00:22:25.872 Test: blockdev nvme passthru rw ...passed 00:22:25.872 Test: blockdev nvme passthru vendor specific ...[2024-07-16 00:06:00.369416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.873 [2024-07-16 00:06:00.369445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.873 [2024-07-16 00:06:00.369603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.873 [2024-07-16 00:06:00.369626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.873 [2024-07-16 00:06:00.369778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.873 [2024-07-16 00:06:00.369803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.873 [2024-07-16 00:06:00.369962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.873 [2024-07-16 00:06:00.369987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.873 passed 00:22:25.873 Test: blockdev nvme admin passthru ...passed 00:22:26.131 Test: blockdev copy ...passed 00:22:26.131 00:22:26.131 Run Summary: Type Total Ran Passed Failed Inactive 00:22:26.131 suites 1 1 n/a 0 0 00:22:26.131 tests 23 23 23 0 0 00:22:26.131 asserts 152 152 152 0 n/a 00:22:26.131 00:22:26.131 Elapsed time = 1.148 seconds 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.389 rmmod nvme_tcp 00:22:26.389 rmmod nvme_fabrics 00:22:26.389 rmmod nvme_keyring 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1288902 ']' 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1288902 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1288902 ']' 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1288902 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1288902 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1288902' 00:22:26.389 killing process with pid 1288902 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1288902 00:22:26.389 00:06:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1288902 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.957 00:06:01 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.860 00:06:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.860 00:22:28.860 real 0m5.984s 00:22:28.860 user 0m9.857s 00:22:28.860 sys 0m2.233s 00:22:28.860 00:06:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.860 00:06:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.860 ************************************ 00:22:28.860 END TEST nvmf_bdevio_no_huge 00:22:28.860 ************************************ 00:22:28.860 00:06:03 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:28.860 00:06:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:28.860 00:06:03 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.860 00:06:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.860 ************************************ 00:22:28.860 START TEST nvmf_tls 00:22:28.860 ************************************ 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:28.860 * Looking for test storage... 00:22:28.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.860 
00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.860 00:06:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:30.764 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:30.764 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.764 00:06:04 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:30.764 Found net devices under 0000:08:00.0: cvl_0_0 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:30.764 Found net devices under 0000:08:00.1: cvl_0_1 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.764 00:06:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.764 00:06:05 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:30.764 00:22:30.764 --- 10.0.0.2 ping statistics --- 00:22:30.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.764 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:30.764 00:22:30.764 --- 10.0.0.1 ping statistics --- 00:22:30.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.764 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1290617 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1290617 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1290617 ']' 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.764 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.765 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.765 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.765 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.765 [2024-07-16 00:06:05.149667] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:30.765 [2024-07-16 00:06:05.149752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.765 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.765 [2024-07-16 00:06:05.216389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.023 [2024-07-16 00:06:05.302631] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.023 [2024-07-16 00:06:05.302691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.023 [2024-07-16 00:06:05.302707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.023 [2024-07-16 00:06:05.302722] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.023 [2024-07-16 00:06:05.302735] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.023 [2024-07-16 00:06:05.302764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:31.023 00:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:31.281 true 00:22:31.281 00:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.281 00:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:31.539 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:31.539 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:31.539 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:32.104 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:32.104 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.362 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:32.362 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:32.362 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:32.620 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.620 00:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:32.878 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:32.878 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:32.878 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.878 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:33.136 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:33.136 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:33.136 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:33.395 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:33.395 00:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:33.653 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:33.653 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:33.653 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:33.911 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:33.911 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.M6eNNvCF6f 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:34.476 
00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.FC6DVwpg8n 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.M6eNNvCF6f 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FC6DVwpg8n 00:22:34.476 00:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:34.734 00:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:34.991 00:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.M6eNNvCF6f 00:22:34.991 00:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.M6eNNvCF6f 00:22:34.991 00:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.248 [2024-07-16 00:06:09.742766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.522 00:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.785 00:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.042 [2024-07-16 00:06:10.332385] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.042 [2024-07-16 00:06:10.332600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:22:36.042 00:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:36.299 malloc0 00:22:36.299 00:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.557 00:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M6eNNvCF6f 00:22:36.813 [2024-07-16 00:06:11.225250] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:36.813 00:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.M6eNNvCF6f 00:22:36.813 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.054 Initializing NVMe Controllers 00:22:49.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:49.054 Initialization complete. Launching workers. 
00:22:49.054 ======================================================== 00:22:49.054 Latency(us) 00:22:49.054 Device Information : IOPS MiB/s Average min max 00:22:49.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7483.60 29.23 8554.87 1381.44 11787.44 00:22:49.054 ======================================================== 00:22:49.054 Total : 7483.60 29.23 8554.87 1381.44 11787.44 00:22:49.054 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.M6eNNvCF6f 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M6eNNvCF6f' 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1292602 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1292602 /var/tmp/bdevperf.sock 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1292602 ']' 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.054 [2024-07-16 00:06:21.404844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:49.054 [2024-07-16 00:06:21.404948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292602 ] 00:22:49.054 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.054 [2024-07-16 00:06:21.465197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.054 [2024-07-16 00:06:21.552786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:49.054 00:06:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M6eNNvCF6f 00:22:49.054 [2024-07-16 00:06:21.931767] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.054 [2024-07-16 00:06:21.931900] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:49.054 TLSTESTn1 00:22:49.054 00:06:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:49.054 Running I/O for 10 seconds... 00:22:59.013 00:22:59.013 Latency(us) 00:22:59.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.013 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.013 Verification LBA range: start 0x0 length 0x2000 00:22:59.013 TLSTESTn1 : 10.02 3282.73 12.82 0.00 0.00 38917.14 9854.67 45632.47 00:22:59.013 =================================================================================================================== 00:22:59.013 Total : 3282.73 12.82 0.00 0.00 38917.14 9854.67 45632.47 00:22:59.013 0 00:22:59.013 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.013 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1292602 00:22:59.013 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1292602 ']' 00:22:59.013 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1292602 00:22:59.013 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1292602 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1292602' 00:22:59.014 killing process with pid 1292602 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1292602 00:22:59.014 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.014 00:22:59.014 Latency(us) 
00:22:59.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.014 =================================================================================================================== 00:22:59.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.014 [2024-07-16 00:06:32.221575] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1292602 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC6DVwpg8n 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC6DVwpg8n 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FC6DVwpg8n 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FC6DVwpg8n' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1293550 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1293550 /var/tmp/bdevperf.sock 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1293550 ']' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.014 [2024-07-16 00:06:32.432547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:59.014 [2024-07-16 00:06:32.432648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293550 ] 00:22:59.014 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.014 [2024-07-16 00:06:32.493207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.014 [2024-07-16 00:06:32.580746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FC6DVwpg8n 00:22:59.014 [2024-07-16 00:06:32.959910] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.014 [2024-07-16 00:06:32.960038] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.014 [2024-07-16 00:06:32.970607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.014 [2024-07-16 00:06:32.971233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a6c0 (107): Transport endpoint is not connected 00:22:59.014 [2024-07-16 00:06:32.972223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a6c0 (9): Bad file descriptor 00:22:59.014 [2024-07-16 00:06:32.973222] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.014 [2024-07-16 00:06:32.973244] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.014 [2024-07-16 00:06:32.973264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.014 request: 00:22:59.014 { 00:22:59.014 "name": "TLSTEST", 00:22:59.014 "trtype": "tcp", 00:22:59.014 "traddr": "10.0.0.2", 00:22:59.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.014 "adrfam": "ipv4", 00:22:59.014 "trsvcid": "4420", 00:22:59.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.014 "psk": "/tmp/tmp.FC6DVwpg8n", 00:22:59.014 "method": "bdev_nvme_attach_controller", 00:22:59.014 "req_id": 1 00:22:59.014 } 00:22:59.014 Got JSON-RPC error response 00:22:59.014 response: 00:22:59.014 { 00:22:59.014 "code": -5, 00:22:59.014 "message": "Input/output error" 00:22:59.014 } 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1293550 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1293550 ']' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1293550 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.014 00:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293550 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293550' 00:22:59.014 killing process with pid 1293550 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1293550 
00:22:59.014 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.014 00:22:59.014 Latency(us) 00:22:59.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.014 =================================================================================================================== 00:22:59.014 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.014 [2024-07-16 00:06:33.018236] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1293550 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M6eNNvCF6f 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M6eNNvCF6f 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.M6eNNvCF6f 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M6eNNvCF6f' 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1293651 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1293651 /var/tmp/bdevperf.sock 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1293651 ']' 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.014 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.014 [2024-07-16 00:06:33.226769] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:59.014 [2024-07-16 00:06:33.226871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293651 ] 00:22:59.014 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.014 [2024-07-16 00:06:33.303317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.014 [2024-07-16 00:06:33.409421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.271 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.271 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.271 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.M6eNNvCF6f 00:22:59.530 [2024-07-16 00:06:33.875538] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.530 [2024-07-16 00:06:33.875668] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.530 [2024-07-16 00:06:33.882849] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:59.530 [2024-07-16 00:06:33.882883] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:59.530 [2024-07-16 00:06:33.882925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.530 [2024-07-16 00:06:33.883809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151f6c0 (107): Transport endpoint is not connected 00:22:59.530 [2024-07-16 00:06:33.884775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151f6c0 (9): Bad file descriptor 00:22:59.530 [2024-07-16 00:06:33.885782] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.530 [2024-07-16 00:06:33.885811] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.530 [2024-07-16 00:06:33.885838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:59.530 request: 00:22:59.530 { 00:22:59.530 "name": "TLSTEST", 00:22:59.530 "trtype": "tcp", 00:22:59.530 "traddr": "10.0.0.2", 00:22:59.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:59.530 "adrfam": "ipv4", 00:22:59.530 "trsvcid": "4420", 00:22:59.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.530 "psk": "/tmp/tmp.M6eNNvCF6f", 00:22:59.530 "method": "bdev_nvme_attach_controller", 00:22:59.530 "req_id": 1 00:22:59.530 } 00:22:59.530 Got JSON-RPC error response 00:22:59.530 response: 00:22:59.530 { 00:22:59.530 "code": -5, 00:22:59.530 "message": "Input/output error" 00:22:59.530 } 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1293651 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1293651 ']' 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1293651 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293651 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293651' 00:22:59.530 killing process with pid 1293651 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1293651 00:22:59.530 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.530 00:22:59.530 Latency(us) 00:22:59.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.530 =================================================================================================================== 00:22:59.530 Total : 0.00 0.00 0.00 
0.00 0.00 18446744073709551616.00 0.00 00:22:59.530 [2024-07-16 00:06:33.934381] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.530 00:06:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1293651 00:22:59.788 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:59.788 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:59.788 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:59.788 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M6eNNvCF6f 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M6eNNvCF6f 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.M6eNNvCF6f 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.M6eNNvCF6f' 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1293679 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1293679 /var/tmp/bdevperf.sock 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1293679 ']' 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.789 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.789 [2024-07-16 00:06:34.138553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:59.789 [2024-07-16 00:06:34.138653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293679 ] 00:22:59.789 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.789 [2024-07-16 00:06:34.198836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.789 [2024-07-16 00:06:34.286432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.048 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:00.048 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:00.048 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.M6eNNvCF6f 00:23:00.308 [2024-07-16 00:06:34.665784] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.308 [2024-07-16 00:06:34.665917] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:00.308 [2024-07-16 00:06:34.674935] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.308 [2024-07-16 00:06:34.674969] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.308 [2024-07-16 00:06:34.675012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:00.308 
[2024-07-16 00:06:34.675113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25446c0 (107): Transport endpoint is not connected 00:23:00.308 [2024-07-16 00:06:34.676112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25446c0 (9): Bad file descriptor 00:23:00.308 [2024-07-16 00:06:34.677103] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:00.308 [2024-07-16 00:06:34.677123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.308 [2024-07-16 00:06:34.677147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:00.308 request: 00:23:00.308 { 00:23:00.308 "name": "TLSTEST", 00:23:00.308 "trtype": "tcp", 00:23:00.308 "traddr": "10.0.0.2", 00:23:00.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.308 "adrfam": "ipv4", 00:23:00.308 "trsvcid": "4420", 00:23:00.308 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.308 "psk": "/tmp/tmp.M6eNNvCF6f", 00:23:00.308 "method": "bdev_nvme_attach_controller", 00:23:00.308 "req_id": 1 00:23:00.308 } 00:23:00.308 Got JSON-RPC error response 00:23:00.308 response: 00:23:00.308 { 00:23:00.308 "code": -5, 00:23:00.308 "message": "Input/output error" 00:23:00.308 } 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1293679 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1293679 ']' 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1293679 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293679 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
process_name=reactor_2 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293679' 00:23:00.308 killing process with pid 1293679 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1293679 00:23:00.308 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.308 00:23:00.308 Latency(us) 00:23:00.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.308 =================================================================================================================== 00:23:00.308 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.308 [2024-07-16 00:06:34.724515] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:00.308 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1293679 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:00.568 
00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1293769 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1293769 /var/tmp/bdevperf.sock 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1293769 ']' 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.568 00:06:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.568 [2024-07-16 00:06:34.932675] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:00.568 [2024-07-16 00:06:34.932775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293769 ] 00:23:00.568 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.568 [2024-07-16 00:06:34.992910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.827 [2024-07-16 00:06:35.083995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.827 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:00.827 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:00.827 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:01.086 [2024-07-16 00:06:35.467008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:01.086 [2024-07-16 00:06:35.468860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cdeaa0 (9): Bad file descriptor 00:23:01.086 [2024-07-16 00:06:35.469840] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.086 [2024-07-16 00:06:35.469875] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:01.086 [2024-07-16 00:06:35.469894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:01.086 request: 00:23:01.086 { 00:23:01.086 "name": "TLSTEST", 00:23:01.086 "trtype": "tcp", 00:23:01.086 "traddr": "10.0.0.2", 00:23:01.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.086 "adrfam": "ipv4", 00:23:01.086 "trsvcid": "4420", 00:23:01.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.086 "method": "bdev_nvme_attach_controller", 00:23:01.086 "req_id": 1 00:23:01.086 } 00:23:01.086 Got JSON-RPC error response 00:23:01.086 response: 00:23:01.086 { 00:23:01.086 "code": -5, 00:23:01.086 "message": "Input/output error" 00:23:01.086 } 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1293769 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1293769 ']' 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1293769 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293769 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293769' 00:23:01.086 killing process with pid 1293769 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1293769 00:23:01.086 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.086 00:23:01.086 Latency(us) 00:23:01.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:01.086 =================================================================================================================== 00:23:01.086 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.086 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1293769 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1290617 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1290617 ']' 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1290617 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1290617 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1290617' 00:23:01.345 killing process with pid 1290617 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1290617 00:23:01.345 [2024-07-16 00:06:35.701202] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:01.345 00:06:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1290617 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.QwikHWGTmN 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.QwikHWGTmN 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1293883 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1293883 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1293883 ']' 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.604 00:06:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.604 [2024-07-16 00:06:35.989591] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:01.604 [2024-07-16 00:06:35.989694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.604 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.604 [2024-07-16 00:06:36.054990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.862 [2024-07-16 00:06:36.144481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.862 [2024-07-16 00:06:36.144535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.862 [2024-07-16 00:06:36.144551] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.862 [2024-07-16 00:06:36.144564] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:01.862 [2024-07-16 00:06:36.144576] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.862 [2024-07-16 00:06:36.144608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QwikHWGTmN 00:23:01.862 00:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.120 [2024-07-16 00:06:36.549112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.120 00:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:02.378 00:06:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:02.636 [2024-07-16 00:06:37.138669] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:02.636 [2024-07-16 00:06:37.138892] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.894 
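The `format_interchange_psk` step earlier in this run wraps the configured hex string in the NVMe/TCP TLS PSK interchange format: a `NVMeTLSkey-1:` prefix, the hash identifier as two hex digits, then base64 of the key bytes with a 4-byte CRC32 appended, and a trailing colon. A minimal sketch of that derivation, mirroring the inline `python -` call made by `nvmf/common.sh@705` (the function name and the little-endian CRC byte order are assumptions inferred from the `key_long` value printed above, not taken from SPDK source):

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    """Sketch of the PSK interchange formatting seen in the log (assumed):
    base64(key bytes || CRC32 of key bytes, little-endian), wrapped in the
    NVMeTLSkey-1 prefix with the hash identifier as two hex digits."""
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(data + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"

# Same inputs as target/tls.sh@159 in the run above.
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```

The harness then writes this value to a `mktemp` file and `chmod 0600`s it before handing the path to `nvmf_subsystem_add_host --psk`.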
00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:02.894 malloc0 00:23:02.894 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:03.152 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:03.410 [2024-07-16 00:06:37.866718] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QwikHWGTmN 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QwikHWGTmN' 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1294091 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1294091 /var/tmp/bdevperf.sock 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' 
-z 1294091 ']' 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:03.410 00:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.669 [2024-07-16 00:06:37.932243] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:03.669 [2024-07-16 00:06:37.932337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294091 ] 00:23:03.669 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.669 [2024-07-16 00:06:37.986696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.669 [2024-07-16 00:06:38.073621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.669 00:06:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:03.669 00:06:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:03.669 00:06:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:03.927 [2024-07-16 00:06:38.386265] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:03.927 [2024-07-16 00:06:38.386396] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:04.185 TLSTESTn1 00:23:04.185 00:06:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.185 Running I/O for 10 seconds... 00:23:14.202 00:23:14.202 Latency(us) 00:23:14.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:14.202 Verification LBA range: start 0x0 length 0x2000 00:23:14.202 TLSTESTn1 : 10.02 3118.21 12.18 0.00 0.00 40965.86 8349.77 38641.97 00:23:14.202 =================================================================================================================== 00:23:14.202 Total : 3118.21 12.18 0.00 0.00 40965.86 8349.77 38641.97 00:23:14.202 0 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1294091 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1294091 ']' 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1294091 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1294091 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 1294091' 00:23:14.202 killing process with pid 1294091 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1294091 00:23:14.202 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.202 00:23:14.202 Latency(us) 00:23:14.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.202 =================================================================================================================== 00:23:14.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.202 [2024-07-16 00:06:48.665002] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.202 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1294091 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.QwikHWGTmN 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QwikHWGTmN 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QwikHWGTmN 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QwikHWGTmN 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QwikHWGTmN' 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1295043 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1295043 /var/tmp/bdevperf.sock 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295043 ']' 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.460 00:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.460 [2024-07-16 00:06:48.882526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:14.460 [2024-07-16 00:06:48.882630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295043 ] 00:23:14.460 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.460 [2024-07-16 00:06:48.942454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.718 [2024-07-16 00:06:49.030032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.718 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.718 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.718 00:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:14.976 [2024-07-16 00:06:49.409189] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.976 [2024-07-16 00:06:49.409274] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:14.976 [2024-07-16 00:06:49.409292] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.QwikHWGTmN 00:23:14.976 request: 00:23:14.976 { 00:23:14.976 "name": "TLSTEST", 00:23:14.976 "trtype": "tcp", 00:23:14.976 "traddr": "10.0.0.2", 00:23:14.976 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.976 "adrfam": "ipv4", 00:23:14.976 "trsvcid": "4420", 00:23:14.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.976 "psk": "/tmp/tmp.QwikHWGTmN", 00:23:14.976 "method": "bdev_nvme_attach_controller", 00:23:14.976 "req_id": 1 00:23:14.976 } 00:23:14.976 Got JSON-RPC error response 00:23:14.976 response: 00:23:14.976 { 00:23:14.976 "code": -1, 00:23:14.976 
"message": "Operation not permitted" 00:23:14.976 } 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1295043 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295043 ']' 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295043 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295043 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295043' 00:23:14.976 killing process with pid 1295043 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295043 00:23:14.976 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.976 00:23:14.976 Latency(us) 00:23:14.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.976 =================================================================================================================== 00:23:14.976 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.976 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295043 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1293883 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1293883 ']' 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1293883 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293883 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293883' 00:23:15.234 killing process with pid 1293883 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1293883 00:23:15.234 [2024-07-16 00:06:49.641878] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:15.234 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1293883 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1295161 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- 
# waitforlisten 1295161 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295161 ']' 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:15.492 00:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.492 [2024-07-16 00:06:49.871499] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:15.492 [2024-07-16 00:06:49.871590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.492 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.492 [2024-07-16 00:06:49.936074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.749 [2024-07-16 00:06:50.022031] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.749 [2024-07-16 00:06:50.022088] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.749 [2024-07-16 00:06:50.022112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.749 [2024-07-16 00:06:50.022127] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.750 [2024-07-16 00:06:50.022149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:15.750 [2024-07-16 00:06:50.022179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QwikHWGTmN 00:23:15.750 00:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.007 [2024-07-16 00:06:50.425563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.007 00:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.265 00:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.523 [2024-07-16 00:06:51.003158] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.523 [2024-07-16 00:06:51.003397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.523 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.781 malloc0 00:23:16.781 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.038 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:17.295 [2024-07-16 00:06:51.735518] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:17.295 [2024-07-16 00:06:51.735557] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:17.295 [2024-07-16 00:06:51.735598] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:17.295 request: 00:23:17.295 { 00:23:17.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.295 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.295 "psk": "/tmp/tmp.QwikHWGTmN", 00:23:17.295 "method": "nvmf_subsystem_add_host", 00:23:17.295 "req_id": 1 00:23:17.295 } 00:23:17.295 Got JSON-RPC error response 00:23:17.295 response: 00:23:17.295 { 00:23:17.295 "code": -32603, 00:23:17.295 
"message": "Internal error" 00:23:17.295 } 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1295161 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295161 ']' 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295161 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295161 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:17.295 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:17.296 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295161' 00:23:17.296 killing process with pid 1295161 00:23:17.296 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295161 00:23:17.296 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295161 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.QwikHWGTmN 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.563 
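Both negative-path failures above ("Incorrect permissions for PSK file" from `bdev_nvme.c:6122` and `tcp.c:3575`) come from the same guard: after `target/tls.sh@170` relaxes the key file to `0666`, SPDK refuses to load a PSK file that is readable or writable by group or others. A rough sketch of that kind of check (the function name `check_psk_permissions` is hypothetical; SPDK implements the real guard in C):

```python
import os
import stat
import tempfile

def check_psk_permissions(path: str) -> None:
    """Reject PSK files accessible to group/others (mode & 0o077 != 0),
    mirroring the 'Incorrect permissions for PSK file' errors in the log."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(f"Incorrect permissions for PSK file: {oct(mode)}")

# A 0600 key file passes; relaxing it to 0666 (as the test does) fails.
with tempfile.NamedTemporaryFile(delete=False) as f:
    psk_path = f.name
os.chmod(psk_path, 0o600)
check_psk_permissions(psk_path)  # no error at 0600
os.chmod(psk_path, 0o666)
try:
    check_psk_permissions(psk_path)
    rejected = False
except PermissionError:
    rejected = True
os.unlink(psk_path)
```

This is why the run restores `chmod 0600` at `target/tls.sh@181` before reusing the same key file in the next test case.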
00:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1295380 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1295380 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295380 ']' 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:17.563 00:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.563 [2024-07-16 00:06:52.002782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:17.563 [2024-07-16 00:06:52.002872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.563 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.563 [2024-07-16 00:06:52.061618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.832 [2024-07-16 00:06:52.151250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.832 [2024-07-16 00:06:52.151306] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:17.832 [2024-07-16 00:06:52.151322] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.832 [2024-07-16 00:06:52.151336] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.832 [2024-07-16 00:06:52.151349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.832 [2024-07-16 00:06:52.151380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QwikHWGTmN 00:23:17.832 00:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.095 [2024-07-16 00:06:52.560296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.095 00:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:18.660 00:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:23:18.660 [2024-07-16 00:06:53.153868] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.660 [2024-07-16 00:06:53.154101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.918 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:19.175 malloc0 00:23:19.175 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:19.433 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:19.433 [2024-07-16 00:06:53.942480] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1295595 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1295595 /var/tmp/bdevperf.sock 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295595 ']' 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:19.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:19.692 00:06:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.692 [2024-07-16 00:06:54.005894] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:19.692 [2024-07-16 00:06:54.005988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295595 ] 00:23:19.692 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.692 [2024-07-16 00:06:54.059627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.692 [2024-07-16 00:06:54.146726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.950 00:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.950 00:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:19.950 00:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:20.208 [2024-07-16 00:06:54.467659] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.208 [2024-07-16 00:06:54.467787] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:20.208 TLSTESTn1 00:23:20.208 00:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:20.466 
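The `save_config` dump that follows is plain JSON-RPC configuration; the TLS-relevant knobs live in the `sock` subsystem's `sock_impl_set_options` entries (`tls_version`, `enable_ktls`). A small sketch of pulling those out of such a saved config (the sample JSON is abbreviated from the dump; the helper name is made up for illustration):

```python
import json

# Abbreviated stand-in for the tgtconf JSON captured by 'rpc.py save_config'.
tgtconf = json.loads("""
{
  "subsystems": [
    {"subsystem": "keyring", "config": []},
    {"subsystem": "sock", "config": [
      {"method": "sock_set_default_impl", "params": {"impl_name": "posix"}},
      {"method": "sock_impl_set_options",
       "params": {"impl_name": "ssl", "tls_version": 0, "enable_ktls": false}},
      {"method": "sock_impl_set_options",
       "params": {"impl_name": "posix", "tls_version": 0, "enable_ktls": false}}
    ]}
  ]
}
""")

def sock_impl_options(conf: dict, impl: str) -> dict:
    """Return the sock_impl_set_options params for one socket implementation."""
    for sub in conf["subsystems"]:
        if sub["subsystem"] != "sock":
            continue
        for entry in sub["config"]:
            if (entry["method"] == "sock_impl_set_options"
                    and entry["params"]["impl_name"] == impl):
                return entry["params"]
    return {}

ssl_opts = sock_impl_options(tgtconf, "ssl")
```

`tls_version: 0` here means "no version pinned" (implementation default), which is why the experimental-TLS notices in the log appear without an explicit protocol version.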
00:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:20.466 "subsystems": [ 00:23:20.466 { 00:23:20.466 "subsystem": "keyring", 00:23:20.466 "config": [] 00:23:20.466 }, 00:23:20.466 { 00:23:20.466 "subsystem": "iobuf", 00:23:20.466 "config": [ 00:23:20.466 { 00:23:20.466 "method": "iobuf_set_options", 00:23:20.466 "params": { 00:23:20.467 "small_pool_count": 8192, 00:23:20.467 "large_pool_count": 1024, 00:23:20.467 "small_bufsize": 8192, 00:23:20.467 "large_bufsize": 135168 00:23:20.467 } 00:23:20.467 } 00:23:20.467 ] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "sock", 00:23:20.467 "config": [ 00:23:20.467 { 00:23:20.467 "method": "sock_set_default_impl", 00:23:20.467 "params": { 00:23:20.467 "impl_name": "posix" 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "sock_impl_set_options", 00:23:20.467 "params": { 00:23:20.467 "impl_name": "ssl", 00:23:20.467 "recv_buf_size": 4096, 00:23:20.467 "send_buf_size": 4096, 00:23:20.467 "enable_recv_pipe": true, 00:23:20.467 "enable_quickack": false, 00:23:20.467 "enable_placement_id": 0, 00:23:20.467 "enable_zerocopy_send_server": true, 00:23:20.467 "enable_zerocopy_send_client": false, 00:23:20.467 "zerocopy_threshold": 0, 00:23:20.467 "tls_version": 0, 00:23:20.467 "enable_ktls": false 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "sock_impl_set_options", 00:23:20.467 "params": { 00:23:20.467 "impl_name": "posix", 00:23:20.467 "recv_buf_size": 2097152, 00:23:20.467 "send_buf_size": 2097152, 00:23:20.467 "enable_recv_pipe": true, 00:23:20.467 "enable_quickack": false, 00:23:20.467 "enable_placement_id": 0, 00:23:20.467 "enable_zerocopy_send_server": true, 00:23:20.467 "enable_zerocopy_send_client": false, 00:23:20.467 "zerocopy_threshold": 0, 00:23:20.467 "tls_version": 0, 00:23:20.467 "enable_ktls": false 00:23:20.467 } 00:23:20.467 } 00:23:20.467 ] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "vmd", 00:23:20.467 "config": [] 00:23:20.467 
}, 00:23:20.467 { 00:23:20.467 "subsystem": "accel", 00:23:20.467 "config": [ 00:23:20.467 { 00:23:20.467 "method": "accel_set_options", 00:23:20.467 "params": { 00:23:20.467 "small_cache_size": 128, 00:23:20.467 "large_cache_size": 16, 00:23:20.467 "task_count": 2048, 00:23:20.467 "sequence_count": 2048, 00:23:20.467 "buf_count": 2048 00:23:20.467 } 00:23:20.467 } 00:23:20.467 ] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "bdev", 00:23:20.467 "config": [ 00:23:20.467 { 00:23:20.467 "method": "bdev_set_options", 00:23:20.467 "params": { 00:23:20.467 "bdev_io_pool_size": 65535, 00:23:20.467 "bdev_io_cache_size": 256, 00:23:20.467 "bdev_auto_examine": true, 00:23:20.467 "iobuf_small_cache_size": 128, 00:23:20.467 "iobuf_large_cache_size": 16 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_raid_set_options", 00:23:20.467 "params": { 00:23:20.467 "process_window_size_kb": 1024 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_iscsi_set_options", 00:23:20.467 "params": { 00:23:20.467 "timeout_sec": 30 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_nvme_set_options", 00:23:20.467 "params": { 00:23:20.467 "action_on_timeout": "none", 00:23:20.467 "timeout_us": 0, 00:23:20.467 "timeout_admin_us": 0, 00:23:20.467 "keep_alive_timeout_ms": 10000, 00:23:20.467 "arbitration_burst": 0, 00:23:20.467 "low_priority_weight": 0, 00:23:20.467 "medium_priority_weight": 0, 00:23:20.467 "high_priority_weight": 0, 00:23:20.467 "nvme_adminq_poll_period_us": 10000, 00:23:20.467 "nvme_ioq_poll_period_us": 0, 00:23:20.467 "io_queue_requests": 0, 00:23:20.467 "delay_cmd_submit": true, 00:23:20.467 "transport_retry_count": 4, 00:23:20.467 "bdev_retry_count": 3, 00:23:20.467 "transport_ack_timeout": 0, 00:23:20.467 "ctrlr_loss_timeout_sec": 0, 00:23:20.467 "reconnect_delay_sec": 0, 00:23:20.467 "fast_io_fail_timeout_sec": 0, 00:23:20.467 "disable_auto_failback": false, 00:23:20.467 "generate_uuids": 
false, 00:23:20.467 "transport_tos": 0, 00:23:20.467 "nvme_error_stat": false, 00:23:20.467 "rdma_srq_size": 0, 00:23:20.467 "io_path_stat": false, 00:23:20.467 "allow_accel_sequence": false, 00:23:20.467 "rdma_max_cq_size": 0, 00:23:20.467 "rdma_cm_event_timeout_ms": 0, 00:23:20.467 "dhchap_digests": [ 00:23:20.467 "sha256", 00:23:20.467 "sha384", 00:23:20.467 "sha512" 00:23:20.467 ], 00:23:20.467 "dhchap_dhgroups": [ 00:23:20.467 "null", 00:23:20.467 "ffdhe2048", 00:23:20.467 "ffdhe3072", 00:23:20.467 "ffdhe4096", 00:23:20.467 "ffdhe6144", 00:23:20.467 "ffdhe8192" 00:23:20.467 ] 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_nvme_set_hotplug", 00:23:20.467 "params": { 00:23:20.467 "period_us": 100000, 00:23:20.467 "enable": false 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_malloc_create", 00:23:20.467 "params": { 00:23:20.467 "name": "malloc0", 00:23:20.467 "num_blocks": 8192, 00:23:20.467 "block_size": 4096, 00:23:20.467 "physical_block_size": 4096, 00:23:20.467 "uuid": "2479bf01-ac5e-4868-8aac-c5de45702a22", 00:23:20.467 "optimal_io_boundary": 0 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "bdev_wait_for_examine" 00:23:20.467 } 00:23:20.467 ] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "nbd", 00:23:20.467 "config": [] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "scheduler", 00:23:20.467 "config": [ 00:23:20.467 { 00:23:20.467 "method": "framework_set_scheduler", 00:23:20.467 "params": { 00:23:20.467 "name": "static" 00:23:20.467 } 00:23:20.467 } 00:23:20.467 ] 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "subsystem": "nvmf", 00:23:20.467 "config": [ 00:23:20.467 { 00:23:20.467 "method": "nvmf_set_config", 00:23:20.467 "params": { 00:23:20.467 "discovery_filter": "match_any", 00:23:20.467 "admin_cmd_passthru": { 00:23:20.467 "identify_ctrlr": false 00:23:20.467 } 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": 
"nvmf_set_max_subsystems", 00:23:20.467 "params": { 00:23:20.467 "max_subsystems": 1024 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "nvmf_set_crdt", 00:23:20.467 "params": { 00:23:20.467 "crdt1": 0, 00:23:20.467 "crdt2": 0, 00:23:20.467 "crdt3": 0 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "nvmf_create_transport", 00:23:20.467 "params": { 00:23:20.467 "trtype": "TCP", 00:23:20.467 "max_queue_depth": 128, 00:23:20.467 "max_io_qpairs_per_ctrlr": 127, 00:23:20.467 "in_capsule_data_size": 4096, 00:23:20.467 "max_io_size": 131072, 00:23:20.467 "io_unit_size": 131072, 00:23:20.467 "max_aq_depth": 128, 00:23:20.467 "num_shared_buffers": 511, 00:23:20.467 "buf_cache_size": 4294967295, 00:23:20.467 "dif_insert_or_strip": false, 00:23:20.467 "zcopy": false, 00:23:20.467 "c2h_success": false, 00:23:20.467 "sock_priority": 0, 00:23:20.467 "abort_timeout_sec": 1, 00:23:20.467 "ack_timeout": 0, 00:23:20.467 "data_wr_pool_size": 0 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "nvmf_create_subsystem", 00:23:20.467 "params": { 00:23:20.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.467 "allow_any_host": false, 00:23:20.467 "serial_number": "SPDK00000000000001", 00:23:20.467 "model_number": "SPDK bdev Controller", 00:23:20.467 "max_namespaces": 10, 00:23:20.467 "min_cntlid": 1, 00:23:20.467 "max_cntlid": 65519, 00:23:20.467 "ana_reporting": false 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "nvmf_subsystem_add_host", 00:23:20.467 "params": { 00:23:20.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.467 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.467 "psk": "/tmp/tmp.QwikHWGTmN" 00:23:20.467 } 00:23:20.467 }, 00:23:20.467 { 00:23:20.467 "method": "nvmf_subsystem_add_ns", 00:23:20.467 "params": { 00:23:20.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.467 "namespace": { 00:23:20.467 "nsid": 1, 00:23:20.467 "bdev_name": "malloc0", 00:23:20.468 "nguid": 
"2479BF01AC5E48688AACC5DE45702A22", 00:23:20.468 "uuid": "2479bf01-ac5e-4868-8aac-c5de45702a22", 00:23:20.468 "no_auto_visible": false 00:23:20.468 } 00:23:20.468 } 00:23:20.468 }, 00:23:20.468 { 00:23:20.468 "method": "nvmf_subsystem_add_listener", 00:23:20.468 "params": { 00:23:20.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.468 "listen_address": { 00:23:20.468 "trtype": "TCP", 00:23:20.468 "adrfam": "IPv4", 00:23:20.468 "traddr": "10.0.0.2", 00:23:20.468 "trsvcid": "4420" 00:23:20.468 }, 00:23:20.468 "secure_channel": true 00:23:20.468 } 00:23:20.468 } 00:23:20.468 ] 00:23:20.468 } 00:23:20.468 ] 00:23:20.468 }' 00:23:20.468 00:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:20.726 00:06:55 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:20.726 "subsystems": [ 00:23:20.726 { 00:23:20.726 "subsystem": "keyring", 00:23:20.726 "config": [] 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "subsystem": "iobuf", 00:23:20.726 "config": [ 00:23:20.726 { 00:23:20.726 "method": "iobuf_set_options", 00:23:20.726 "params": { 00:23:20.726 "small_pool_count": 8192, 00:23:20.726 "large_pool_count": 1024, 00:23:20.726 "small_bufsize": 8192, 00:23:20.726 "large_bufsize": 135168 00:23:20.726 } 00:23:20.726 } 00:23:20.726 ] 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "subsystem": "sock", 00:23:20.726 "config": [ 00:23:20.726 { 00:23:20.726 "method": "sock_set_default_impl", 00:23:20.726 "params": { 00:23:20.726 "impl_name": "posix" 00:23:20.726 } 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "method": "sock_impl_set_options", 00:23:20.726 "params": { 00:23:20.726 "impl_name": "ssl", 00:23:20.726 "recv_buf_size": 4096, 00:23:20.726 "send_buf_size": 4096, 00:23:20.726 "enable_recv_pipe": true, 00:23:20.726 "enable_quickack": false, 00:23:20.726 "enable_placement_id": 0, 00:23:20.726 "enable_zerocopy_send_server": true, 00:23:20.726 
"enable_zerocopy_send_client": false, 00:23:20.726 "zerocopy_threshold": 0, 00:23:20.726 "tls_version": 0, 00:23:20.726 "enable_ktls": false 00:23:20.726 } 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "method": "sock_impl_set_options", 00:23:20.726 "params": { 00:23:20.726 "impl_name": "posix", 00:23:20.726 "recv_buf_size": 2097152, 00:23:20.726 "send_buf_size": 2097152, 00:23:20.726 "enable_recv_pipe": true, 00:23:20.726 "enable_quickack": false, 00:23:20.726 "enable_placement_id": 0, 00:23:20.726 "enable_zerocopy_send_server": true, 00:23:20.726 "enable_zerocopy_send_client": false, 00:23:20.726 "zerocopy_threshold": 0, 00:23:20.726 "tls_version": 0, 00:23:20.726 "enable_ktls": false 00:23:20.726 } 00:23:20.726 } 00:23:20.726 ] 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "subsystem": "vmd", 00:23:20.726 "config": [] 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "subsystem": "accel", 00:23:20.726 "config": [ 00:23:20.726 { 00:23:20.726 "method": "accel_set_options", 00:23:20.726 "params": { 00:23:20.726 "small_cache_size": 128, 00:23:20.726 "large_cache_size": 16, 00:23:20.726 "task_count": 2048, 00:23:20.726 "sequence_count": 2048, 00:23:20.726 "buf_count": 2048 00:23:20.726 } 00:23:20.726 } 00:23:20.726 ] 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "subsystem": "bdev", 00:23:20.726 "config": [ 00:23:20.726 { 00:23:20.726 "method": "bdev_set_options", 00:23:20.726 "params": { 00:23:20.726 "bdev_io_pool_size": 65535, 00:23:20.726 "bdev_io_cache_size": 256, 00:23:20.726 "bdev_auto_examine": true, 00:23:20.726 "iobuf_small_cache_size": 128, 00:23:20.726 "iobuf_large_cache_size": 16 00:23:20.726 } 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "method": "bdev_raid_set_options", 00:23:20.726 "params": { 00:23:20.726 "process_window_size_kb": 1024 00:23:20.726 } 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "method": "bdev_iscsi_set_options", 00:23:20.726 "params": { 00:23:20.726 "timeout_sec": 30 00:23:20.726 } 00:23:20.726 }, 00:23:20.726 { 00:23:20.726 "method": 
"bdev_nvme_set_options", 00:23:20.726 "params": { 00:23:20.726 "action_on_timeout": "none", 00:23:20.726 "timeout_us": 0, 00:23:20.726 "timeout_admin_us": 0, 00:23:20.726 "keep_alive_timeout_ms": 10000, 00:23:20.726 "arbitration_burst": 0, 00:23:20.726 "low_priority_weight": 0, 00:23:20.726 "medium_priority_weight": 0, 00:23:20.726 "high_priority_weight": 0, 00:23:20.726 "nvme_adminq_poll_period_us": 10000, 00:23:20.726 "nvme_ioq_poll_period_us": 0, 00:23:20.726 "io_queue_requests": 512, 00:23:20.726 "delay_cmd_submit": true, 00:23:20.726 "transport_retry_count": 4, 00:23:20.726 "bdev_retry_count": 3, 00:23:20.727 "transport_ack_timeout": 0, 00:23:20.727 "ctrlr_loss_timeout_sec": 0, 00:23:20.727 "reconnect_delay_sec": 0, 00:23:20.727 "fast_io_fail_timeout_sec": 0, 00:23:20.727 "disable_auto_failback": false, 00:23:20.727 "generate_uuids": false, 00:23:20.727 "transport_tos": 0, 00:23:20.727 "nvme_error_stat": false, 00:23:20.727 "rdma_srq_size": 0, 00:23:20.727 "io_path_stat": false, 00:23:20.727 "allow_accel_sequence": false, 00:23:20.727 "rdma_max_cq_size": 0, 00:23:20.727 "rdma_cm_event_timeout_ms": 0, 00:23:20.727 "dhchap_digests": [ 00:23:20.727 "sha256", 00:23:20.727 "sha384", 00:23:20.727 "sha512" 00:23:20.727 ], 00:23:20.727 "dhchap_dhgroups": [ 00:23:20.727 "null", 00:23:20.727 "ffdhe2048", 00:23:20.727 "ffdhe3072", 00:23:20.727 "ffdhe4096", 00:23:20.727 "ffdhe6144", 00:23:20.727 "ffdhe8192" 00:23:20.727 ] 00:23:20.727 } 00:23:20.727 }, 00:23:20.727 { 00:23:20.727 "method": "bdev_nvme_attach_controller", 00:23:20.727 "params": { 00:23:20.727 "name": "TLSTEST", 00:23:20.727 "trtype": "TCP", 00:23:20.727 "adrfam": "IPv4", 00:23:20.727 "traddr": "10.0.0.2", 00:23:20.727 "trsvcid": "4420", 00:23:20.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.727 "prchk_reftag": false, 00:23:20.727 "prchk_guard": false, 00:23:20.727 "ctrlr_loss_timeout_sec": 0, 00:23:20.727 "reconnect_delay_sec": 0, 00:23:20.727 "fast_io_fail_timeout_sec": 0, 00:23:20.727 "psk": 
"/tmp/tmp.QwikHWGTmN", 00:23:20.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.727 "hdgst": false, 00:23:20.727 "ddgst": false 00:23:20.727 } 00:23:20.727 }, 00:23:20.727 { 00:23:20.727 "method": "bdev_nvme_set_hotplug", 00:23:20.727 "params": { 00:23:20.727 "period_us": 100000, 00:23:20.727 "enable": false 00:23:20.727 } 00:23:20.727 }, 00:23:20.727 { 00:23:20.727 "method": "bdev_wait_for_examine" 00:23:20.727 } 00:23:20.727 ] 00:23:20.727 }, 00:23:20.727 { 00:23:20.727 "subsystem": "nbd", 00:23:20.727 "config": [] 00:23:20.727 } 00:23:20.727 ] 00:23:20.727 }' 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1295595 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295595 ']' 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295595 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295595 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295595' 00:23:20.727 killing process with pid 1295595 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295595 00:23:20.727 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.727 00:23:20.727 Latency(us) 00:23:20.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.727 =================================================================================================================== 00:23:20.727 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:23:20.727 [2024-07-16 00:06:55.207872] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.727 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295595 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1295380 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295380 ']' 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295380 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295380 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:20.985 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:20.986 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295380' 00:23:20.986 killing process with pid 1295380 00:23:20.986 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295380 00:23:20.986 [2024-07-16 00:06:55.400444] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:20.986 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295380 00:23:21.244 00:06:55 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:21.244 00:06:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:21.244 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:21.244 00:06:55 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:21.244 "subsystems": [ 
00:23:21.244 { 00:23:21.244 "subsystem": "keyring", 00:23:21.244 "config": [] 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "subsystem": "iobuf", 00:23:21.244 "config": [ 00:23:21.244 { 00:23:21.244 "method": "iobuf_set_options", 00:23:21.244 "params": { 00:23:21.244 "small_pool_count": 8192, 00:23:21.244 "large_pool_count": 1024, 00:23:21.244 "small_bufsize": 8192, 00:23:21.244 "large_bufsize": 135168 00:23:21.244 } 00:23:21.244 } 00:23:21.244 ] 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "subsystem": "sock", 00:23:21.244 "config": [ 00:23:21.244 { 00:23:21.244 "method": "sock_set_default_impl", 00:23:21.244 "params": { 00:23:21.244 "impl_name": "posix" 00:23:21.244 } 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "method": "sock_impl_set_options", 00:23:21.244 "params": { 00:23:21.244 "impl_name": "ssl", 00:23:21.244 "recv_buf_size": 4096, 00:23:21.244 "send_buf_size": 4096, 00:23:21.244 "enable_recv_pipe": true, 00:23:21.244 "enable_quickack": false, 00:23:21.244 "enable_placement_id": 0, 00:23:21.244 "enable_zerocopy_send_server": true, 00:23:21.244 "enable_zerocopy_send_client": false, 00:23:21.244 "zerocopy_threshold": 0, 00:23:21.244 "tls_version": 0, 00:23:21.244 "enable_ktls": false 00:23:21.244 } 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "method": "sock_impl_set_options", 00:23:21.244 "params": { 00:23:21.244 "impl_name": "posix", 00:23:21.244 "recv_buf_size": 2097152, 00:23:21.244 "send_buf_size": 2097152, 00:23:21.244 "enable_recv_pipe": true, 00:23:21.244 "enable_quickack": false, 00:23:21.244 "enable_placement_id": 0, 00:23:21.244 "enable_zerocopy_send_server": true, 00:23:21.244 "enable_zerocopy_send_client": false, 00:23:21.244 "zerocopy_threshold": 0, 00:23:21.244 "tls_version": 0, 00:23:21.244 "enable_ktls": false 00:23:21.244 } 00:23:21.244 } 00:23:21.244 ] 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "subsystem": "vmd", 00:23:21.244 "config": [] 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "subsystem": "accel", 00:23:21.244 "config": [ 00:23:21.244 { 
00:23:21.244 "method": "accel_set_options", 00:23:21.244 "params": { 00:23:21.244 "small_cache_size": 128, 00:23:21.244 "large_cache_size": 16, 00:23:21.244 "task_count": 2048, 00:23:21.244 "sequence_count": 2048, 00:23:21.244 "buf_count": 2048 00:23:21.244 } 00:23:21.244 } 00:23:21.244 ] 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "subsystem": "bdev", 00:23:21.244 "config": [ 00:23:21.244 { 00:23:21.244 "method": "bdev_set_options", 00:23:21.244 "params": { 00:23:21.244 "bdev_io_pool_size": 65535, 00:23:21.244 "bdev_io_cache_size": 256, 00:23:21.244 "bdev_auto_examine": true, 00:23:21.244 "iobuf_small_cache_size": 128, 00:23:21.244 "iobuf_large_cache_size": 16 00:23:21.244 } 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "method": "bdev_raid_set_options", 00:23:21.244 "params": { 00:23:21.244 "process_window_size_kb": 1024 00:23:21.244 } 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "method": "bdev_iscsi_set_options", 00:23:21.244 "params": { 00:23:21.244 "timeout_sec": 30 00:23:21.244 } 00:23:21.244 }, 00:23:21.244 { 00:23:21.244 "method": "bdev_nvme_set_options", 00:23:21.244 "params": { 00:23:21.244 "action_on_timeout": "none", 00:23:21.244 "timeout_us": 0, 00:23:21.244 "timeout_admin_us": 0, 00:23:21.244 "keep_alive_timeout_ms": 10000, 00:23:21.244 "arbitration_burst": 0, 00:23:21.244 "low_priority_weight": 0, 00:23:21.244 "medium_priority_weight": 0, 00:23:21.244 "high_priority_weight": 0, 00:23:21.244 "nvme_adminq_poll_period_us": 10000, 00:23:21.244 "nvme_ioq_poll_period_us": 0, 00:23:21.244 "io_queue_requests": 0, 00:23:21.244 "delay_cmd_submit": true, 00:23:21.244 "transport_retry_count": 4, 00:23:21.244 "bdev_retry_count": 3, 00:23:21.244 "transport_ack_timeout": 0, 00:23:21.244 "ctrlr_loss_timeout_sec": 0, 00:23:21.245 "reconnect_delay_sec": 0, 00:23:21.245 "fast_io_fail_timeout_sec": 0, 00:23:21.245 "disable_auto_failback": false, 00:23:21.245 "generate_uuids": false, 00:23:21.245 "transport_tos": 0, 00:23:21.245 "nvme_error_stat": false, 00:23:21.245 
"rdma_srq_size": 0, 00:23:21.245 "io_path_stat": false, 00:23:21.245 "allow_accel_sequence": false, 00:23:21.245 "rdma_max_cq_size": 0, 00:23:21.245 "rdma_cm_event_timeout_ms": 0, 00:23:21.245 "dhchap_digests": [ 00:23:21.245 "sha256", 00:23:21.245 "sha384", 00:23:21.245 "sha512" 00:23:21.245 ], 00:23:21.245 "dhchap_dhgroups": [ 00:23:21.245 "null", 00:23:21.245 "ffdhe2048", 00:23:21.245 "ffdhe3072", 00:23:21.245 "ffdhe4096", 00:23:21.245 "ffdhe6144", 00:23:21.245 "ffdhe8192" 00:23:21.245 ] 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "bdev_nvme_set_hotplug", 00:23:21.245 "params": { 00:23:21.245 "period_us": 100000, 00:23:21.245 "enable": false 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "bdev_malloc_create", 00:23:21.245 "params": { 00:23:21.245 "name": "malloc0", 00:23:21.245 "num_blocks": 8192, 00:23:21.245 "block_size": 4096, 00:23:21.245 "physical_block_size": 4096, 00:23:21.245 "uuid": "2479bf01-ac5e-4868-8aac-c5de45702a22", 00:23:21.245 "optimal_io_boundary": 0 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "bdev_wait_for_examine" 00:23:21.245 } 00:23:21.245 ] 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "subsystem": "nbd", 00:23:21.245 "config": [] 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "subsystem": "scheduler", 00:23:21.245 "config": [ 00:23:21.245 { 00:23:21.245 "method": "framework_set_scheduler", 00:23:21.245 "params": { 00:23:21.245 "name": "static" 00:23:21.245 } 00:23:21.245 } 00:23:21.245 ] 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "subsystem": "nvmf", 00:23:21.245 "config": [ 00:23:21.245 { 00:23:21.245 "method": "nvmf_set_config", 00:23:21.245 "params": { 00:23:21.245 "discovery_filter": "match_any", 00:23:21.245 "admin_cmd_passthru": { 00:23:21.245 "identify_ctrlr": false 00:23:21.245 } 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_set_max_subsystems", 00:23:21.245 "params": { 00:23:21.245 "max_subsystems": 1024 00:23:21.245 } 00:23:21.245 
}, 00:23:21.245 { 00:23:21.245 "method": "nvmf_set_crdt", 00:23:21.245 "params": { 00:23:21.245 "crdt1": 0, 00:23:21.245 "crdt2": 0, 00:23:21.245 "crdt3": 0 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_create_transport", 00:23:21.245 "params": { 00:23:21.245 "trtype": "TCP", 00:23:21.245 "max_queue_depth": 128, 00:23:21.245 "max_io_qpairs_per_ctrlr": 127, 00:23:21.245 "in_capsule_data_size": 4096, 00:23:21.245 "max_io_size": 131072, 00:23:21.245 "io_unit_size": 131072, 00:23:21.245 "max_aq_depth": 128, 00:23:21.245 "num_shared_buffers": 511, 00:23:21.245 "buf_cache_size": 4294967295, 00:23:21.245 "dif_insert_or_strip": false, 00:23:21.245 "zcopy": false, 00:23:21.245 "c2h_success": false, 00:23:21.245 "sock_priority": 0, 00:23:21.245 "abort_timeout_sec": 1, 00:23:21.245 "ack_timeout": 0, 00:23:21.245 "data_wr_pool_size": 0 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_create_subsystem", 00:23:21.245 "params": { 00:23:21.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.245 "allow_any_host": false, 00:23:21.245 "serial_number": "SPDK00000000000001", 00:23:21.245 "model_number": "SPDK bdev Controller", 00:23:21.245 "max_namespaces": 10, 00:23:21.245 "min_cntlid": 1, 00:23:21.245 "max_cntlid": 65519, 00:23:21.245 "ana_reporting": false 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_subsystem_add_host", 00:23:21.245 "params": { 00:23:21.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.245 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.245 "psk": "/tmp/tmp.QwikHWGTmN" 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_subsystem_add_ns", 00:23:21.245 "params": { 00:23:21.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.245 "namespace": { 00:23:21.245 "nsid": 1, 00:23:21.245 "bdev_name": "malloc0", 00:23:21.245 "nguid": "2479BF01AC5E48688AACC5DE45702A22", 00:23:21.245 "uuid": "2479bf01-ac5e-4868-8aac-c5de45702a22", 00:23:21.245 "no_auto_visible": false 
00:23:21.245 } 00:23:21.245 } 00:23:21.245 }, 00:23:21.245 { 00:23:21.245 "method": "nvmf_subsystem_add_listener", 00:23:21.245 "params": { 00:23:21.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.245 "listen_address": { 00:23:21.245 "trtype": "TCP", 00:23:21.245 "adrfam": "IPv4", 00:23:21.245 "traddr": "10.0.0.2", 00:23:21.245 "trsvcid": "4420" 00:23:21.245 }, 00:23:21.245 "secure_channel": true 00:23:21.245 } 00:23:21.245 } 00:23:21.245 ] 00:23:21.245 } 00:23:21.245 ] 00:23:21.245 }' 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1295708 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1295708 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295708 ']' 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.245 00:06:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.245 [2024-07-16 00:06:55.631661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:21.245 [2024-07-16 00:06:55.631760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.245 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.245 [2024-07-16 00:06:55.696720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.502 [2024-07-16 00:06:55.786338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.502 [2024-07-16 00:06:55.786396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.502 [2024-07-16 00:06:55.786412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.502 [2024-07-16 00:06:55.786426] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.502 [2024-07-16 00:06:55.786438] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.502 [2024-07-16 00:06:55.786524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.502 [2024-07-16 00:06:56.007145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.759 [2024-07-16 00:06:56.023067] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:21.759 [2024-07-16 00:06:56.039130] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.759 [2024-07-16 00:06:56.056323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1295820 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1295820 /var/tmp/bdevperf.sock 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1295820 ']' 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:22.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.325 00:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:22.325 "subsystems": [ 00:23:22.325 { 00:23:22.325 "subsystem": "keyring", 00:23:22.325 "config": [] 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "subsystem": "iobuf", 00:23:22.325 "config": [ 00:23:22.325 { 00:23:22.325 "method": "iobuf_set_options", 00:23:22.325 "params": { 00:23:22.325 "small_pool_count": 8192, 00:23:22.325 "large_pool_count": 1024, 00:23:22.325 "small_bufsize": 8192, 00:23:22.325 "large_bufsize": 135168 00:23:22.325 } 00:23:22.325 } 00:23:22.325 ] 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "subsystem": "sock", 00:23:22.325 "config": [ 00:23:22.325 { 00:23:22.325 "method": "sock_set_default_impl", 00:23:22.325 "params": { 00:23:22.325 "impl_name": "posix" 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "sock_impl_set_options", 00:23:22.325 "params": { 00:23:22.325 "impl_name": "ssl", 00:23:22.325 "recv_buf_size": 4096, 00:23:22.325 "send_buf_size": 4096, 00:23:22.325 "enable_recv_pipe": true, 00:23:22.325 "enable_quickack": false, 00:23:22.325 "enable_placement_id": 0, 00:23:22.325 "enable_zerocopy_send_server": true, 00:23:22.325 "enable_zerocopy_send_client": false, 00:23:22.325 "zerocopy_threshold": 0, 00:23:22.325 "tls_version": 0, 00:23:22.325 "enable_ktls": false 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "sock_impl_set_options", 00:23:22.325 "params": { 00:23:22.325 "impl_name": "posix", 00:23:22.325 "recv_buf_size": 
2097152, 00:23:22.325 "send_buf_size": 2097152, 00:23:22.325 "enable_recv_pipe": true, 00:23:22.325 "enable_quickack": false, 00:23:22.325 "enable_placement_id": 0, 00:23:22.325 "enable_zerocopy_send_server": true, 00:23:22.325 "enable_zerocopy_send_client": false, 00:23:22.325 "zerocopy_threshold": 0, 00:23:22.325 "tls_version": 0, 00:23:22.325 "enable_ktls": false 00:23:22.325 } 00:23:22.325 } 00:23:22.325 ] 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "subsystem": "vmd", 00:23:22.325 "config": [] 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "subsystem": "accel", 00:23:22.325 "config": [ 00:23:22.325 { 00:23:22.325 "method": "accel_set_options", 00:23:22.325 "params": { 00:23:22.325 "small_cache_size": 128, 00:23:22.325 "large_cache_size": 16, 00:23:22.325 "task_count": 2048, 00:23:22.325 "sequence_count": 2048, 00:23:22.325 "buf_count": 2048 00:23:22.325 } 00:23:22.325 } 00:23:22.325 ] 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "subsystem": "bdev", 00:23:22.325 "config": [ 00:23:22.325 { 00:23:22.325 "method": "bdev_set_options", 00:23:22.325 "params": { 00:23:22.325 "bdev_io_pool_size": 65535, 00:23:22.325 "bdev_io_cache_size": 256, 00:23:22.325 "bdev_auto_examine": true, 00:23:22.325 "iobuf_small_cache_size": 128, 00:23:22.325 "iobuf_large_cache_size": 16 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "bdev_raid_set_options", 00:23:22.325 "params": { 00:23:22.325 "process_window_size_kb": 1024 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "bdev_iscsi_set_options", 00:23:22.325 "params": { 00:23:22.325 "timeout_sec": 30 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "bdev_nvme_set_options", 00:23:22.325 "params": { 00:23:22.325 "action_on_timeout": "none", 00:23:22.325 "timeout_us": 0, 00:23:22.325 "timeout_admin_us": 0, 00:23:22.325 "keep_alive_timeout_ms": 10000, 00:23:22.325 "arbitration_burst": 0, 00:23:22.325 "low_priority_weight": 0, 00:23:22.325 "medium_priority_weight": 0, 00:23:22.325 
"high_priority_weight": 0, 00:23:22.325 "nvme_adminq_poll_period_us": 10000, 00:23:22.325 "nvme_ioq_poll_period_us": 0, 00:23:22.325 "io_queue_requests": 512, 00:23:22.325 "delay_cmd_submit": true, 00:23:22.325 "transport_retry_count": 4, 00:23:22.325 "bdev_retry_count": 3, 00:23:22.325 "transport_ack_timeout": 0, 00:23:22.325 "ctrlr_loss_timeout_sec": 0, 00:23:22.325 "reconnect_delay_sec": 0, 00:23:22.325 "fast_io_fail_timeout_sec": 0, 00:23:22.325 "disable_auto_failback": false, 00:23:22.325 "generate_uuids": false, 00:23:22.325 "transport_tos": 0, 00:23:22.325 "nvme_error_stat": false, 00:23:22.325 "rdma_srq_size": 0, 00:23:22.325 "io_path_stat": false, 00:23:22.325 "allow_accel_sequence": false, 00:23:22.325 "rdma_max_cq_size": 0, 00:23:22.325 "rdma_cm_event_timeout_ms": 0, 00:23:22.325 "dhchap_digests": [ 00:23:22.325 "sha256", 00:23:22.325 "sha384", 00:23:22.325 "sha512" 00:23:22.325 ], 00:23:22.325 "dhchap_dhgroups": [ 00:23:22.325 "null", 00:23:22.325 "ffdhe2048", 00:23:22.325 "ffdhe3072", 00:23:22.325 "ffdhe4096", 00:23:22.325 "ffdhe6144", 00:23:22.325 "ffdhe8192" 00:23:22.325 ] 00:23:22.325 } 00:23:22.325 }, 00:23:22.325 { 00:23:22.325 "method": "bdev_nvme_attach_controller", 00:23:22.325 "params": { 00:23:22.325 "name": "TLSTEST", 00:23:22.325 "trtype": "TCP", 00:23:22.325 "adrfam": "IPv4", 00:23:22.325 "traddr": "10.0.0.2", 00:23:22.325 "trsvcid": "4420", 00:23:22.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.325 "prchk_reftag": false, 00:23:22.325 "prchk_guard": false, 00:23:22.326 "ctrlr_loss_timeout_sec": 0, 00:23:22.326 "reconnect_delay_sec": 0, 00:23:22.326 "fast_io_fail_timeout_sec": 0, 00:23:22.326 "psk": "/tmp/tmp.QwikHWGTmN", 00:23:22.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.326 "hdgst": false, 00:23:22.326 "ddgst": false 00:23:22.326 } 00:23:22.326 }, 00:23:22.326 { 00:23:22.326 "method": "bdev_nvme_set_hotplug", 00:23:22.326 "params": { 00:23:22.326 "period_us": 100000, 00:23:22.326 "enable": false 00:23:22.326 } 
00:23:22.326 }, 00:23:22.326 { 00:23:22.326 "method": "bdev_wait_for_examine" 00:23:22.326 } 00:23:22.326 ] 00:23:22.326 }, 00:23:22.326 { 00:23:22.326 "subsystem": "nbd", 00:23:22.326 "config": [] 00:23:22.326 } 00:23:22.326 ] 00:23:22.326 }' 00:23:22.326 [2024-07-16 00:06:56.747521] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:22.326 [2024-07-16 00:06:56.747624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295820 ] 00:23:22.326 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.326 [2024-07-16 00:06:56.808189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.583 [2024-07-16 00:06:56.899885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.583 [2024-07-16 00:06:57.058574] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.583 [2024-07-16 00:06:57.058720] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:23.517 00:06:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:23.517 00:06:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:23.517 00:06:57 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.517 Running I/O for 10 seconds... 
00:23:33.535 00:23:33.535 Latency(us) 00:23:33.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:33.535 Verification LBA range: start 0x0 length 0x2000 00:23:33.535 TLSTESTn1 : 10.03 3295.73 12.87 0.00 0.00 38757.72 10582.85 46215.02 00:23:33.535 =================================================================================================================== 00:23:33.535 Total : 3295.73 12.87 0.00 0.00 38757.72 10582.85 46215.02 00:23:33.535 0 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1295820 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295820 ']' 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295820 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295820 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:33.535 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295820' 00:23:33.535 killing process with pid 1295820 00:23:33.536 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295820 00:23:33.536 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.536 00:23:33.536 Latency(us) 00:23:33.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.536 
=================================================================================================================== 00:23:33.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.536 [2024-07-16 00:07:07.998886] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:33.536 00:07:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295820 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1295708 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1295708 ']' 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1295708 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1295708 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1295708' 00:23:33.793 killing process with pid 1295708 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1295708 00:23:33.793 [2024-07-16 00:07:08.192670] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.793 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1295708 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1296884 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1296884 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1296884 ']' 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:34.051 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.051 [2024-07-16 00:07:08.420893] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:34.051 [2024-07-16 00:07:08.420994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.051 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.051 [2024-07-16 00:07:08.487456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.309 [2024-07-16 00:07:08.576372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:34.309 [2024-07-16 00:07:08.576427] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.309 [2024-07-16 00:07:08.576443] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.309 [2024-07-16 00:07:08.576456] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.309 [2024-07-16 00:07:08.576468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.309 [2024-07-16 00:07:08.576506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.QwikHWGTmN 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QwikHWGTmN 00:23:34.309 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.566 [2024-07-16 00:07:08.963609] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.566 00:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.823 00:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:35.079 [2024-07-16 00:07:09.541190] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.079 [2024-07-16 00:07:09.541426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.079 00:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.336 malloc0 00:23:35.594 00:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.851 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QwikHWGTmN 00:23:36.108 [2024-07-16 00:07:10.438064] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1297057 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1297057 /var/tmp/bdevperf.sock 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1297057 ']' 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:36.108 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.108 [2024-07-16 00:07:10.503577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:36.108 [2024-07-16 00:07:10.503676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297057 ] 00:23:36.108 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.108 [2024-07-16 00:07:10.577237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.365 [2024-07-16 00:07:10.683040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.365 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:36.365 00:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:36.365 00:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QwikHWGTmN 00:23:36.623 00:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.880 [2024-07-16 00:07:11.374002] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.138 nvme0n1 00:23:37.138 
00:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.138 Running I/O for 1 seconds... 00:23:38.512 00:23:38.512 Latency(us) 00:23:38.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.512 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:38.512 Verification LBA range: start 0x0 length 0x2000 00:23:38.512 nvme0n1 : 1.02 3145.96 12.29 0.00 0.00 40245.09 8349.77 38059.43 00:23:38.512 =================================================================================================================== 00:23:38.512 Total : 3145.96 12.29 0.00 0.00 40245.09 8349.77 38059.43 00:23:38.512 0 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1297057 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1297057 ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1297057 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297057 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297057' 00:23:38.512 killing process with pid 1297057 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1297057 00:23:38.512 Received shutdown signal, test time was about 1.000000 seconds 00:23:38.512 00:23:38.512 Latency(us) 00:23:38.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:38.512 =================================================================================================================== 00:23:38.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1297057 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1296884 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1296884 ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1296884 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1296884 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1296884' 00:23:38.512 killing process with pid 1296884 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1296884 00:23:38.512 [2024-07-16 00:07:12.831524] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:38.512 00:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1296884 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1297301 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1297301 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1297301 ']' 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.512 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.770 [2024-07-16 00:07:13.064058] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:38.770 [2024-07-16 00:07:13.064136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.770 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.770 [2024-07-16 00:07:13.127526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.770 [2024-07-16 00:07:13.213278] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.770 [2024-07-16 00:07:13.213338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:38.770 [2024-07-16 00:07:13.213355] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.770 [2024-07-16 00:07:13.213369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.770 [2024-07-16 00:07:13.213382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.770 [2024-07-16 00:07:13.213413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.028 [2024-07-16 00:07:13.343371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.028 malloc0 00:23:39.028 [2024-07-16 00:07:13.373949] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.028 [2024-07-16 00:07:13.374202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1297327 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
1297327 /var/tmp/bdevperf.sock 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1297327 ']' 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.028 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.028 [2024-07-16 00:07:13.448410] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:39.028 [2024-07-16 00:07:13.448507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297327 ] 00:23:39.028 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.028 [2024-07-16 00:07:13.508949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.285 [2024-07-16 00:07:13.596344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.285 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.285 00:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:39.285 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QwikHWGTmN 00:23:39.543 00:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:39.801 [2024-07-16 00:07:14.272426] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.059 nvme0n1 00:23:40.059 00:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.059 Running I/O for 1 seconds... 
00:23:41.432 00:23:41.432 Latency(us) 00:23:41.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.432 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:41.432 Verification LBA range: start 0x0 length 0x2000 00:23:41.432 nvme0n1 : 1.03 3155.63 12.33 0.00 0.00 39856.37 10485.76 42331.40 00:23:41.432 =================================================================================================================== 00:23:41.432 Total : 3155.63 12.33 0.00 0.00 39856.37 10485.76 42331.40 00:23:41.432 0 00:23:41.432 00:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:41.432 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.432 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.432 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.432 00:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:41.432 "subsystems": [ 00:23:41.432 { 00:23:41.432 "subsystem": "keyring", 00:23:41.432 "config": [ 00:23:41.432 { 00:23:41.432 "method": "keyring_file_add_key", 00:23:41.432 "params": { 00:23:41.432 "name": "key0", 00:23:41.432 "path": "/tmp/tmp.QwikHWGTmN" 00:23:41.432 } 00:23:41.432 } 00:23:41.432 ] 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "subsystem": "iobuf", 00:23:41.432 "config": [ 00:23:41.432 { 00:23:41.432 "method": "iobuf_set_options", 00:23:41.432 "params": { 00:23:41.432 "small_pool_count": 8192, 00:23:41.432 "large_pool_count": 1024, 00:23:41.432 "small_bufsize": 8192, 00:23:41.432 "large_bufsize": 135168 00:23:41.432 } 00:23:41.432 } 00:23:41.432 ] 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "subsystem": "sock", 00:23:41.432 "config": [ 00:23:41.432 { 00:23:41.432 "method": "sock_set_default_impl", 00:23:41.432 "params": { 00:23:41.432 "impl_name": "posix" 00:23:41.432 } 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "method": "sock_impl_set_options", 00:23:41.432 
"params": { 00:23:41.432 "impl_name": "ssl", 00:23:41.432 "recv_buf_size": 4096, 00:23:41.432 "send_buf_size": 4096, 00:23:41.432 "enable_recv_pipe": true, 00:23:41.432 "enable_quickack": false, 00:23:41.432 "enable_placement_id": 0, 00:23:41.432 "enable_zerocopy_send_server": true, 00:23:41.432 "enable_zerocopy_send_client": false, 00:23:41.432 "zerocopy_threshold": 0, 00:23:41.432 "tls_version": 0, 00:23:41.432 "enable_ktls": false 00:23:41.432 } 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "method": "sock_impl_set_options", 00:23:41.432 "params": { 00:23:41.432 "impl_name": "posix", 00:23:41.432 "recv_buf_size": 2097152, 00:23:41.432 "send_buf_size": 2097152, 00:23:41.432 "enable_recv_pipe": true, 00:23:41.432 "enable_quickack": false, 00:23:41.432 "enable_placement_id": 0, 00:23:41.432 "enable_zerocopy_send_server": true, 00:23:41.432 "enable_zerocopy_send_client": false, 00:23:41.432 "zerocopy_threshold": 0, 00:23:41.432 "tls_version": 0, 00:23:41.432 "enable_ktls": false 00:23:41.432 } 00:23:41.432 } 00:23:41.432 ] 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "subsystem": "vmd", 00:23:41.432 "config": [] 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "subsystem": "accel", 00:23:41.432 "config": [ 00:23:41.432 { 00:23:41.432 "method": "accel_set_options", 00:23:41.432 "params": { 00:23:41.432 "small_cache_size": 128, 00:23:41.432 "large_cache_size": 16, 00:23:41.432 "task_count": 2048, 00:23:41.432 "sequence_count": 2048, 00:23:41.432 "buf_count": 2048 00:23:41.432 } 00:23:41.432 } 00:23:41.432 ] 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "subsystem": "bdev", 00:23:41.432 "config": [ 00:23:41.432 { 00:23:41.432 "method": "bdev_set_options", 00:23:41.432 "params": { 00:23:41.432 "bdev_io_pool_size": 65535, 00:23:41.432 "bdev_io_cache_size": 256, 00:23:41.432 "bdev_auto_examine": true, 00:23:41.432 "iobuf_small_cache_size": 128, 00:23:41.432 "iobuf_large_cache_size": 16 00:23:41.432 } 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "method": "bdev_raid_set_options", 
00:23:41.432 "params": { 00:23:41.432 "process_window_size_kb": 1024 00:23:41.432 } 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "method": "bdev_iscsi_set_options", 00:23:41.432 "params": { 00:23:41.432 "timeout_sec": 30 00:23:41.432 } 00:23:41.432 }, 00:23:41.432 { 00:23:41.432 "method": "bdev_nvme_set_options", 00:23:41.432 "params": { 00:23:41.432 "action_on_timeout": "none", 00:23:41.432 "timeout_us": 0, 00:23:41.432 "timeout_admin_us": 0, 00:23:41.432 "keep_alive_timeout_ms": 10000, 00:23:41.432 "arbitration_burst": 0, 00:23:41.432 "low_priority_weight": 0, 00:23:41.432 "medium_priority_weight": 0, 00:23:41.432 "high_priority_weight": 0, 00:23:41.432 "nvme_adminq_poll_period_us": 10000, 00:23:41.432 "nvme_ioq_poll_period_us": 0, 00:23:41.432 "io_queue_requests": 0, 00:23:41.432 "delay_cmd_submit": true, 00:23:41.432 "transport_retry_count": 4, 00:23:41.432 "bdev_retry_count": 3, 00:23:41.432 "transport_ack_timeout": 0, 00:23:41.432 "ctrlr_loss_timeout_sec": 0, 00:23:41.432 "reconnect_delay_sec": 0, 00:23:41.433 "fast_io_fail_timeout_sec": 0, 00:23:41.433 "disable_auto_failback": false, 00:23:41.433 "generate_uuids": false, 00:23:41.433 "transport_tos": 0, 00:23:41.433 "nvme_error_stat": false, 00:23:41.433 "rdma_srq_size": 0, 00:23:41.433 "io_path_stat": false, 00:23:41.433 "allow_accel_sequence": false, 00:23:41.433 "rdma_max_cq_size": 0, 00:23:41.433 "rdma_cm_event_timeout_ms": 0, 00:23:41.433 "dhchap_digests": [ 00:23:41.433 "sha256", 00:23:41.433 "sha384", 00:23:41.433 "sha512" 00:23:41.433 ], 00:23:41.433 "dhchap_dhgroups": [ 00:23:41.433 "null", 00:23:41.433 "ffdhe2048", 00:23:41.433 "ffdhe3072", 00:23:41.433 "ffdhe4096", 00:23:41.433 "ffdhe6144", 00:23:41.433 "ffdhe8192" 00:23:41.433 ] 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "bdev_nvme_set_hotplug", 00:23:41.433 "params": { 00:23:41.433 "period_us": 100000, 00:23:41.433 "enable": false 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "bdev_malloc_create", 
00:23:41.433 "params": { 00:23:41.433 "name": "malloc0", 00:23:41.433 "num_blocks": 8192, 00:23:41.433 "block_size": 4096, 00:23:41.433 "physical_block_size": 4096, 00:23:41.433 "uuid": "49c6d3f9-3524-4929-b6df-2d7d2175ea92", 00:23:41.433 "optimal_io_boundary": 0 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "bdev_wait_for_examine" 00:23:41.433 } 00:23:41.433 ] 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "subsystem": "nbd", 00:23:41.433 "config": [] 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "subsystem": "scheduler", 00:23:41.433 "config": [ 00:23:41.433 { 00:23:41.433 "method": "framework_set_scheduler", 00:23:41.433 "params": { 00:23:41.433 "name": "static" 00:23:41.433 } 00:23:41.433 } 00:23:41.433 ] 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "subsystem": "nvmf", 00:23:41.433 "config": [ 00:23:41.433 { 00:23:41.433 "method": "nvmf_set_config", 00:23:41.433 "params": { 00:23:41.433 "discovery_filter": "match_any", 00:23:41.433 "admin_cmd_passthru": { 00:23:41.433 "identify_ctrlr": false 00:23:41.433 } 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_set_max_subsystems", 00:23:41.433 "params": { 00:23:41.433 "max_subsystems": 1024 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_set_crdt", 00:23:41.433 "params": { 00:23:41.433 "crdt1": 0, 00:23:41.433 "crdt2": 0, 00:23:41.433 "crdt3": 0 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_create_transport", 00:23:41.433 "params": { 00:23:41.433 "trtype": "TCP", 00:23:41.433 "max_queue_depth": 128, 00:23:41.433 "max_io_qpairs_per_ctrlr": 127, 00:23:41.433 "in_capsule_data_size": 4096, 00:23:41.433 "max_io_size": 131072, 00:23:41.433 "io_unit_size": 131072, 00:23:41.433 "max_aq_depth": 128, 00:23:41.433 "num_shared_buffers": 511, 00:23:41.433 "buf_cache_size": 4294967295, 00:23:41.433 "dif_insert_or_strip": false, 00:23:41.433 "zcopy": false, 00:23:41.433 "c2h_success": false, 00:23:41.433 "sock_priority": 0, 
00:23:41.433 "abort_timeout_sec": 1, 00:23:41.433 "ack_timeout": 0, 00:23:41.433 "data_wr_pool_size": 0 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_create_subsystem", 00:23:41.433 "params": { 00:23:41.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.433 "allow_any_host": false, 00:23:41.433 "serial_number": "00000000000000000000", 00:23:41.433 "model_number": "SPDK bdev Controller", 00:23:41.433 "max_namespaces": 32, 00:23:41.433 "min_cntlid": 1, 00:23:41.433 "max_cntlid": 65519, 00:23:41.433 "ana_reporting": false 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_subsystem_add_host", 00:23:41.433 "params": { 00:23:41.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.433 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.433 "psk": "key0" 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_subsystem_add_ns", 00:23:41.433 "params": { 00:23:41.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.433 "namespace": { 00:23:41.433 "nsid": 1, 00:23:41.433 "bdev_name": "malloc0", 00:23:41.433 "nguid": "49C6D3F935244929B6DF2D7D2175EA92", 00:23:41.433 "uuid": "49c6d3f9-3524-4929-b6df-2d7d2175ea92", 00:23:41.433 "no_auto_visible": false 00:23:41.433 } 00:23:41.433 } 00:23:41.433 }, 00:23:41.433 { 00:23:41.433 "method": "nvmf_subsystem_add_listener", 00:23:41.433 "params": { 00:23:41.433 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.433 "listen_address": { 00:23:41.433 "trtype": "TCP", 00:23:41.433 "adrfam": "IPv4", 00:23:41.433 "traddr": "10.0.0.2", 00:23:41.433 "trsvcid": "4420" 00:23:41.433 }, 00:23:41.433 "secure_channel": true 00:23:41.433 } 00:23:41.433 } 00:23:41.433 ] 00:23:41.433 } 00:23:41.433 ] 00:23:41.433 }' 00:23:41.433 00:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:41.692 00:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:41.692 "subsystems": [ 00:23:41.692 { 
00:23:41.692 "subsystem": "keyring", 00:23:41.692 "config": [ 00:23:41.692 { 00:23:41.692 "method": "keyring_file_add_key", 00:23:41.692 "params": { 00:23:41.692 "name": "key0", 00:23:41.692 "path": "/tmp/tmp.QwikHWGTmN" 00:23:41.692 } 00:23:41.692 } 00:23:41.692 ] 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "subsystem": "iobuf", 00:23:41.692 "config": [ 00:23:41.692 { 00:23:41.692 "method": "iobuf_set_options", 00:23:41.692 "params": { 00:23:41.692 "small_pool_count": 8192, 00:23:41.692 "large_pool_count": 1024, 00:23:41.692 "small_bufsize": 8192, 00:23:41.692 "large_bufsize": 135168 00:23:41.692 } 00:23:41.692 } 00:23:41.692 ] 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "subsystem": "sock", 00:23:41.692 "config": [ 00:23:41.692 { 00:23:41.692 "method": "sock_set_default_impl", 00:23:41.692 "params": { 00:23:41.692 "impl_name": "posix" 00:23:41.692 } 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "method": "sock_impl_set_options", 00:23:41.692 "params": { 00:23:41.692 "impl_name": "ssl", 00:23:41.692 "recv_buf_size": 4096, 00:23:41.692 "send_buf_size": 4096, 00:23:41.692 "enable_recv_pipe": true, 00:23:41.692 "enable_quickack": false, 00:23:41.692 "enable_placement_id": 0, 00:23:41.692 "enable_zerocopy_send_server": true, 00:23:41.692 "enable_zerocopy_send_client": false, 00:23:41.692 "zerocopy_threshold": 0, 00:23:41.692 "tls_version": 0, 00:23:41.692 "enable_ktls": false 00:23:41.692 } 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "method": "sock_impl_set_options", 00:23:41.692 "params": { 00:23:41.692 "impl_name": "posix", 00:23:41.692 "recv_buf_size": 2097152, 00:23:41.692 "send_buf_size": 2097152, 00:23:41.692 "enable_recv_pipe": true, 00:23:41.692 "enable_quickack": false, 00:23:41.692 "enable_placement_id": 0, 00:23:41.692 "enable_zerocopy_send_server": true, 00:23:41.692 "enable_zerocopy_send_client": false, 00:23:41.692 "zerocopy_threshold": 0, 00:23:41.692 "tls_version": 0, 00:23:41.692 "enable_ktls": false 00:23:41.692 } 00:23:41.692 } 00:23:41.692 ] 
00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "subsystem": "vmd", 00:23:41.692 "config": [] 00:23:41.692 }, 00:23:41.692 { 00:23:41.692 "subsystem": "accel", 00:23:41.692 "config": [ 00:23:41.692 { 00:23:41.692 "method": "accel_set_options", 00:23:41.692 "params": { 00:23:41.692 "small_cache_size": 128, 00:23:41.692 "large_cache_size": 16, 00:23:41.692 "task_count": 2048, 00:23:41.693 "sequence_count": 2048, 00:23:41.693 "buf_count": 2048 00:23:41.693 } 00:23:41.693 } 00:23:41.693 ] 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "subsystem": "bdev", 00:23:41.693 "config": [ 00:23:41.693 { 00:23:41.693 "method": "bdev_set_options", 00:23:41.693 "params": { 00:23:41.693 "bdev_io_pool_size": 65535, 00:23:41.693 "bdev_io_cache_size": 256, 00:23:41.693 "bdev_auto_examine": true, 00:23:41.693 "iobuf_small_cache_size": 128, 00:23:41.693 "iobuf_large_cache_size": 16 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_raid_set_options", 00:23:41.693 "params": { 00:23:41.693 "process_window_size_kb": 1024 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_iscsi_set_options", 00:23:41.693 "params": { 00:23:41.693 "timeout_sec": 30 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_nvme_set_options", 00:23:41.693 "params": { 00:23:41.693 "action_on_timeout": "none", 00:23:41.693 "timeout_us": 0, 00:23:41.693 "timeout_admin_us": 0, 00:23:41.693 "keep_alive_timeout_ms": 10000, 00:23:41.693 "arbitration_burst": 0, 00:23:41.693 "low_priority_weight": 0, 00:23:41.693 "medium_priority_weight": 0, 00:23:41.693 "high_priority_weight": 0, 00:23:41.693 "nvme_adminq_poll_period_us": 10000, 00:23:41.693 "nvme_ioq_poll_period_us": 0, 00:23:41.693 "io_queue_requests": 512, 00:23:41.693 "delay_cmd_submit": true, 00:23:41.693 "transport_retry_count": 4, 00:23:41.693 "bdev_retry_count": 3, 00:23:41.693 "transport_ack_timeout": 0, 00:23:41.693 "ctrlr_loss_timeout_sec": 0, 00:23:41.693 "reconnect_delay_sec": 0, 00:23:41.693 
"fast_io_fail_timeout_sec": 0, 00:23:41.693 "disable_auto_failback": false, 00:23:41.693 "generate_uuids": false, 00:23:41.693 "transport_tos": 0, 00:23:41.693 "nvme_error_stat": false, 00:23:41.693 "rdma_srq_size": 0, 00:23:41.693 "io_path_stat": false, 00:23:41.693 "allow_accel_sequence": false, 00:23:41.693 "rdma_max_cq_size": 0, 00:23:41.693 "rdma_cm_event_timeout_ms": 0, 00:23:41.693 "dhchap_digests": [ 00:23:41.693 "sha256", 00:23:41.693 "sha384", 00:23:41.693 "sha512" 00:23:41.693 ], 00:23:41.693 "dhchap_dhgroups": [ 00:23:41.693 "null", 00:23:41.693 "ffdhe2048", 00:23:41.693 "ffdhe3072", 00:23:41.693 "ffdhe4096", 00:23:41.693 "ffdhe6144", 00:23:41.693 "ffdhe8192" 00:23:41.693 ] 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_nvme_attach_controller", 00:23:41.693 "params": { 00:23:41.693 "name": "nvme0", 00:23:41.693 "trtype": "TCP", 00:23:41.693 "adrfam": "IPv4", 00:23:41.693 "traddr": "10.0.0.2", 00:23:41.693 "trsvcid": "4420", 00:23:41.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.693 "prchk_reftag": false, 00:23:41.693 "prchk_guard": false, 00:23:41.693 "ctrlr_loss_timeout_sec": 0, 00:23:41.693 "reconnect_delay_sec": 0, 00:23:41.693 "fast_io_fail_timeout_sec": 0, 00:23:41.693 "psk": "key0", 00:23:41.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.693 "hdgst": false, 00:23:41.693 "ddgst": false 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_nvme_set_hotplug", 00:23:41.693 "params": { 00:23:41.693 "period_us": 100000, 00:23:41.693 "enable": false 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_enable_histogram", 00:23:41.693 "params": { 00:23:41.693 "name": "nvme0n1", 00:23:41.693 "enable": true 00:23:41.693 } 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "method": "bdev_wait_for_examine" 00:23:41.693 } 00:23:41.693 ] 00:23:41.693 }, 00:23:41.693 { 00:23:41.693 "subsystem": "nbd", 00:23:41.693 "config": [] 00:23:41.693 } 00:23:41.693 ] 00:23:41.693 }' 00:23:41.693 
00:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1297327 00:23:41.693 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1297327 ']' 00:23:41.693 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1297327 00:23:41.693 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:41.693 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:41.693 00:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297327 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297327' 00:23:41.693 killing process with pid 1297327 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1297327 00:23:41.693 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.693 00:23:41.693 Latency(us) 00:23:41.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.693 =================================================================================================================== 00:23:41.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1297327 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1297301 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1297301 ']' 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1297301 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297301 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297301' 00:23:41.693 killing process with pid 1297301 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1297301 00:23:41.693 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1297301 00:23:41.952 00:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:41.952 00:07:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.952 00:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:41.952 "subsystems": [ 00:23:41.952 { 00:23:41.952 "subsystem": "keyring", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "keyring_file_add_key", 00:23:41.952 "params": { 00:23:41.952 "name": "key0", 00:23:41.952 "path": "/tmp/tmp.QwikHWGTmN" 00:23:41.952 } 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "iobuf", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "iobuf_set_options", 00:23:41.952 "params": { 00:23:41.952 "small_pool_count": 8192, 00:23:41.952 "large_pool_count": 1024, 00:23:41.952 "small_bufsize": 8192, 00:23:41.952 "large_bufsize": 135168 00:23:41.952 } 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "sock", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "sock_set_default_impl", 00:23:41.952 "params": { 00:23:41.952 "impl_name": "posix" 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "sock_impl_set_options", 00:23:41.952 "params": { 00:23:41.952 "impl_name": "ssl", 00:23:41.952 "recv_buf_size": 4096, 00:23:41.952 "send_buf_size": 4096, 
00:23:41.952 "enable_recv_pipe": true, 00:23:41.952 "enable_quickack": false, 00:23:41.952 "enable_placement_id": 0, 00:23:41.952 "enable_zerocopy_send_server": true, 00:23:41.952 "enable_zerocopy_send_client": false, 00:23:41.952 "zerocopy_threshold": 0, 00:23:41.952 "tls_version": 0, 00:23:41.952 "enable_ktls": false 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "sock_impl_set_options", 00:23:41.952 "params": { 00:23:41.952 "impl_name": "posix", 00:23:41.952 "recv_buf_size": 2097152, 00:23:41.952 "send_buf_size": 2097152, 00:23:41.952 "enable_recv_pipe": true, 00:23:41.952 "enable_quickack": false, 00:23:41.952 "enable_placement_id": 0, 00:23:41.952 "enable_zerocopy_send_server": true, 00:23:41.952 "enable_zerocopy_send_client": false, 00:23:41.952 "zerocopy_threshold": 0, 00:23:41.952 "tls_version": 0, 00:23:41.952 "enable_ktls": false 00:23:41.952 } 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "vmd", 00:23:41.952 "config": [] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "accel", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "accel_set_options", 00:23:41.952 "params": { 00:23:41.952 "small_cache_size": 128, 00:23:41.952 "large_cache_size": 16, 00:23:41.952 "task_count": 2048, 00:23:41.952 "sequence_count": 2048, 00:23:41.952 "buf_count": 2048 00:23:41.952 } 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "bdev", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "bdev_set_options", 00:23:41.952 "params": { 00:23:41.952 "bdev_io_pool_size": 65535, 00:23:41.952 "bdev_io_cache_size": 256, 00:23:41.952 "bdev_auto_examine": true, 00:23:41.952 "iobuf_small_cache_size": 128, 00:23:41.952 "iobuf_large_cache_size": 16 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "bdev_raid_set_options", 00:23:41.952 "params": { 00:23:41.952 "process_window_size_kb": 1024 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 
00:23:41.952 "method": "bdev_iscsi_set_options", 00:23:41.952 "params": { 00:23:41.952 "timeout_sec": 30 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "bdev_nvme_set_options", 00:23:41.952 "params": { 00:23:41.952 "action_on_timeout": "none", 00:23:41.952 "timeout_us": 0, 00:23:41.952 "timeout_admin_us": 0, 00:23:41.952 "keep_alive_timeout_ms": 10000, 00:23:41.952 "arbitration_burst": 0, 00:23:41.952 "low_priority_weight": 0, 00:23:41.952 "medium_priority_weight": 0, 00:23:41.952 "high_priority_weight": 0, 00:23:41.952 "nvme_adminq_poll_period_us": 10000, 00:23:41.952 "nvme_ioq_poll_period_us": 0, 00:23:41.952 "io_queue_requests": 0, 00:23:41.952 "delay_cmd_submit": true, 00:23:41.952 "transport_retry_count": 4, 00:23:41.952 "bdev_retry_count": 3, 00:23:41.952 "transport_ack_timeout": 0, 00:23:41.952 "ctrlr_loss_timeout_sec": 0, 00:23:41.952 "reconnect_delay_sec": 0, 00:23:41.952 "fast_io_fail_timeout_sec": 0, 00:23:41.952 "disable_auto_failback": false, 00:23:41.952 "generate_uuids": false, 00:23:41.952 "transport_tos": 0, 00:23:41.952 "nvme_error_stat": false, 00:23:41.952 "rdma_srq_size": 0, 00:23:41.952 "io_path_stat": false, 00:23:41.952 "allow_accel_sequence": false, 00:23:41.952 "rdma_max_cq_size": 0, 00:23:41.952 "rdma_cm_event_timeout_ms": 0, 00:23:41.952 "dhchap_digests": [ 00:23:41.952 "sha256", 00:23:41.952 "sha384", 00:23:41.952 "sha512" 00:23:41.952 ], 00:23:41.952 "dhchap_dhgroups": [ 00:23:41.952 "null", 00:23:41.952 "ffdhe2048", 00:23:41.952 "ffdhe3072", 00:23:41.952 "ffdhe4096", 00:23:41.952 "ffdhe6144", 00:23:41.952 "ffdhe8192" 00:23:41.952 ] 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "bdev_nvme_set_hotplug", 00:23:41.952 "params": { 00:23:41.952 "period_us": 100000, 00:23:41.952 "enable": false 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "bdev_malloc_create", 00:23:41.952 "params": { 00:23:41.952 "name": "malloc0", 00:23:41.952 "num_blocks": 8192, 00:23:41.952 
"block_size": 4096, 00:23:41.952 "physical_block_size": 4096, 00:23:41.952 "uuid": "49c6d3f9-3524-4929-b6df-2d7d2175ea92", 00:23:41.952 "optimal_io_boundary": 0 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "bdev_wait_for_examine" 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "nbd", 00:23:41.952 "config": [] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "scheduler", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "framework_set_scheduler", 00:23:41.952 "params": { 00:23:41.952 "name": "static" 00:23:41.952 } 00:23:41.952 } 00:23:41.952 ] 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "subsystem": "nvmf", 00:23:41.952 "config": [ 00:23:41.952 { 00:23:41.952 "method": "nvmf_set_config", 00:23:41.952 "params": { 00:23:41.952 "discovery_filter": "match_any", 00:23:41.952 "admin_cmd_passthru": { 00:23:41.952 "identify_ctrlr": false 00:23:41.952 } 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "nvmf_set_max_subsystems", 00:23:41.952 "params": { 00:23:41.952 "max_subsystems": 1024 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "nvmf_set_crdt", 00:23:41.952 "params": { 00:23:41.952 "crdt1": 0, 00:23:41.952 "crdt2": 0, 00:23:41.952 "crdt3": 0 00:23:41.952 } 00:23:41.952 }, 00:23:41.952 { 00:23:41.952 "method": "nvmf_create_transport", 00:23:41.952 "params": { 00:23:41.952 "trtype": "TCP", 00:23:41.952 "max_queue_depth": 128, 00:23:41.952 "max_io_qpairs_per_ctrlr": 127, 00:23:41.952 "in_capsule_data_size": 4096, 00:23:41.952 "max_io_size": 131072, 00:23:41.953 "io_unit_size": 131072, 00:23:41.953 "max_aq_depth": 128, 00:23:41.953 "num_shared_buffers": 511, 00:23:41.953 "buf_cache_size": 4294967295, 00:23:41.953 "dif_insert_or_strip": false, 00:23:41.953 "zcopy": false, 00:23:41.953 "c2h_success": false, 00:23:41.953 "sock_priority": 0, 00:23:41.953 "abort_timeout_sec": 1, 00:23:41.953 "ack_timeout": 0, 00:23:41.953 "data_wr_pool_size": 0 
00:23:41.953 } 00:23:41.953 }, 00:23:41.953 { 00:23:41.953 "method": "nvmf_create_subsystem", 00:23:41.953 "params": { 00:23:41.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.953 "allow_any_host": false, 00:23:41.953 "serial_number": "00000000000000000000", 00:23:41.953 "model_number": "SPDK bdev Controller", 00:23:41.953 "max_namespaces": 32, 00:23:41.953 "min_cntlid": 1, 00:23:41.953 "max_cntlid": 65519, 00:23:41.953 "ana_reporting": false 00:23:41.953 } 00:23:41.953 }, 00:23:41.953 { 00:23:41.953 "method": "nvmf_subsystem_add_host", 00:23:41.953 "params": { 00:23:41.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.953 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.953 "psk": "key0" 00:23:41.953 } 00:23:41.953 }, 00:23:41.953 { 00:23:41.953 "method": "nvmf_subsystem_add_ns", 00:23:41.953 "params": { 00:23:41.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.953 "namespace": { 00:23:41.953 "nsid": 1, 00:23:41.953 "bdev_name": "malloc0", 00:23:41.953 "nguid": "49C6D3F935244929B6DF2D7D2175EA92", 00:23:41.953 "uuid": "49c6d3f9-3524-4929-b6df-2d7d2175ea92", 00:23:41.953 "no_auto_visible": false 00:23:41.953 } 00:23:41.953 } 00:23:41.953 }, 00:23:41.953 { 00:23:41.953 "method": "nvmf_subsystem_add_listener", 00:23:41.953 "params": { 00:23:41.953 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.953 "listen_address": { 00:23:41.953 "trtype": "TCP", 00:23:41.953 "adrfam": "IPv4", 00:23:41.953 "traddr": "10.0.0.2", 00:23:41.953 "trsvcid": "4420" 00:23:41.953 }, 00:23:41.953 "secure_channel": true 00:23:41.953 } 00:23:41.953 } 00:23:41.953 ] 00:23:41.953 } 00:23:41.953 ] 00:23:41.953 }' 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1297627 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1297627 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1297627 ']' 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.953 00:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.953 [2024-07-16 00:07:16.429690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:41.953 [2024-07-16 00:07:16.429792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.953 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.209 [2024-07-16 00:07:16.494835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.209 [2024-07-16 00:07:16.583936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.209 [2024-07-16 00:07:16.583998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:42.209 [2024-07-16 00:07:16.584014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.209 [2024-07-16 00:07:16.584027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.209 [2024-07-16 00:07:16.584039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.209 [2024-07-16 00:07:16.584127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.466 [2024-07-16 00:07:16.812369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.466 [2024-07-16 00:07:16.844361] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.466 [2024-07-16 00:07:16.855330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1297742 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1297742 /var/tmp/bdevperf.sock 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1297742 ']' 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:43.033 00:07:17 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.033 00:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:43.033 "subsystems": [ 00:23:43.033 { 00:23:43.033 "subsystem": "keyring", 00:23:43.033 "config": [ 00:23:43.033 { 00:23:43.033 "method": "keyring_file_add_key", 00:23:43.033 "params": { 00:23:43.033 "name": "key0", 00:23:43.033 "path": "/tmp/tmp.QwikHWGTmN" 00:23:43.033 } 00:23:43.033 } 00:23:43.033 ] 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "subsystem": "iobuf", 00:23:43.033 "config": [ 00:23:43.033 { 00:23:43.033 "method": "iobuf_set_options", 00:23:43.033 "params": { 00:23:43.033 "small_pool_count": 8192, 00:23:43.033 "large_pool_count": 1024, 00:23:43.033 "small_bufsize": 8192, 00:23:43.033 "large_bufsize": 135168 00:23:43.033 } 00:23:43.033 } 00:23:43.033 ] 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "subsystem": "sock", 00:23:43.033 "config": [ 00:23:43.033 { 00:23:43.033 "method": "sock_set_default_impl", 00:23:43.033 "params": { 00:23:43.033 "impl_name": "posix" 00:23:43.033 } 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "method": "sock_impl_set_options", 00:23:43.033 "params": { 00:23:43.033 "impl_name": "ssl", 00:23:43.033 "recv_buf_size": 4096, 00:23:43.033 "send_buf_size": 4096, 00:23:43.033 "enable_recv_pipe": true, 00:23:43.033 "enable_quickack": false, 00:23:43.033 "enable_placement_id": 0, 00:23:43.033 
"enable_zerocopy_send_server": true, 00:23:43.033 "enable_zerocopy_send_client": false, 00:23:43.033 "zerocopy_threshold": 0, 00:23:43.033 "tls_version": 0, 00:23:43.033 "enable_ktls": false 00:23:43.033 } 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "method": "sock_impl_set_options", 00:23:43.033 "params": { 00:23:43.033 "impl_name": "posix", 00:23:43.033 "recv_buf_size": 2097152, 00:23:43.033 "send_buf_size": 2097152, 00:23:43.033 "enable_recv_pipe": true, 00:23:43.033 "enable_quickack": false, 00:23:43.033 "enable_placement_id": 0, 00:23:43.033 "enable_zerocopy_send_server": true, 00:23:43.033 "enable_zerocopy_send_client": false, 00:23:43.033 "zerocopy_threshold": 0, 00:23:43.033 "tls_version": 0, 00:23:43.033 "enable_ktls": false 00:23:43.033 } 00:23:43.033 } 00:23:43.033 ] 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "subsystem": "vmd", 00:23:43.033 "config": [] 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "subsystem": "accel", 00:23:43.033 "config": [ 00:23:43.033 { 00:23:43.033 "method": "accel_set_options", 00:23:43.033 "params": { 00:23:43.033 "small_cache_size": 128, 00:23:43.033 "large_cache_size": 16, 00:23:43.033 "task_count": 2048, 00:23:43.033 "sequence_count": 2048, 00:23:43.033 "buf_count": 2048 00:23:43.033 } 00:23:43.033 } 00:23:43.033 ] 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "subsystem": "bdev", 00:23:43.033 "config": [ 00:23:43.033 { 00:23:43.033 "method": "bdev_set_options", 00:23:43.033 "params": { 00:23:43.033 "bdev_io_pool_size": 65535, 00:23:43.033 "bdev_io_cache_size": 256, 00:23:43.033 "bdev_auto_examine": true, 00:23:43.033 "iobuf_small_cache_size": 128, 00:23:43.033 "iobuf_large_cache_size": 16 00:23:43.033 } 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "method": "bdev_raid_set_options", 00:23:43.033 "params": { 00:23:43.033 "process_window_size_kb": 1024 00:23:43.033 } 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "method": "bdev_iscsi_set_options", 00:23:43.033 "params": { 00:23:43.033 "timeout_sec": 30 00:23:43.033 } 00:23:43.033 }, 
00:23:43.033 { 00:23:43.033 "method": "bdev_nvme_set_options", 00:23:43.033 "params": { 00:23:43.033 "action_on_timeout": "none", 00:23:43.033 "timeout_us": 0, 00:23:43.033 "timeout_admin_us": 0, 00:23:43.033 "keep_alive_timeout_ms": 10000, 00:23:43.033 "arbitration_burst": 0, 00:23:43.033 "low_priority_weight": 0, 00:23:43.033 "medium_priority_weight": 0, 00:23:43.033 "high_priority_weight": 0, 00:23:43.033 "nvme_adminq_poll_period_us": 10000, 00:23:43.033 "nvme_ioq_poll_period_us": 0, 00:23:43.033 "io_queue_requests": 512, 00:23:43.033 "delay_cmd_submit": true, 00:23:43.033 "transport_retry_count": 4, 00:23:43.033 "bdev_retry_count": 3, 00:23:43.033 "transport_ack_timeout": 0, 00:23:43.033 "ctrlr_loss_timeout_sec": 0, 00:23:43.033 "reconnect_delay_sec": 0, 00:23:43.033 "fast_io_fail_timeout_sec": 0, 00:23:43.033 "disable_auto_failback": false, 00:23:43.033 "generate_uuids": false, 00:23:43.033 "transport_tos": 0, 00:23:43.033 "nvme_error_stat": false, 00:23:43.033 "rdma_srq_size": 0, 00:23:43.033 "io_path_stat": false, 00:23:43.033 "allow_accel_sequence": false, 00:23:43.033 "rdma_max_cq_size": 0, 00:23:43.033 "rdma_cm_event_timeout_ms": 0, 00:23:43.033 "dhchap_digests": [ 00:23:43.033 "sha256", 00:23:43.033 "sha384", 00:23:43.033 "sha512" 00:23:43.033 ], 00:23:43.033 "dhchap_dhgroups": [ 00:23:43.033 "null", 00:23:43.033 "ffdhe2048", 00:23:43.033 "ffdhe3072", 00:23:43.033 "ffdhe4096", 00:23:43.033 "ffdhe6144", 00:23:43.033 "ffdhe8192" 00:23:43.033 ] 00:23:43.033 } 00:23:43.033 }, 00:23:43.033 { 00:23:43.033 "method": "bdev_nvme_attach_controller", 00:23:43.033 "params": { 00:23:43.033 "name": "nvme0", 00:23:43.033 "trtype": "TCP", 00:23:43.033 "adrfam": "IPv4", 00:23:43.033 "traddr": "10.0.0.2", 00:23:43.033 "trsvcid": "4420", 00:23:43.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.033 "prchk_reftag": false, 00:23:43.033 "prchk_guard": false, 00:23:43.033 "ctrlr_loss_timeout_sec": 0, 00:23:43.034 "reconnect_delay_sec": 0, 00:23:43.034 
"fast_io_fail_timeout_sec": 0, 00:23:43.034 "psk": "key0", 00:23:43.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.034 "hdgst": false, 00:23:43.034 "ddgst": false 00:23:43.034 } 00:23:43.034 }, 00:23:43.034 { 00:23:43.034 "method": "bdev_nvme_set_hotplug", 00:23:43.034 "params": { 00:23:43.034 "period_us": 100000, 00:23:43.034 "enable": false 00:23:43.034 } 00:23:43.034 }, 00:23:43.034 { 00:23:43.034 "method": "bdev_enable_histogram", 00:23:43.034 "params": { 00:23:43.034 "name": "nvme0n1", 00:23:43.034 "enable": true 00:23:43.034 } 00:23:43.034 }, 00:23:43.034 { 00:23:43.034 "method": "bdev_wait_for_examine" 00:23:43.034 } 00:23:43.034 ] 00:23:43.034 }, 00:23:43.034 { 00:23:43.034 "subsystem": "nbd", 00:23:43.034 "config": [] 00:23:43.034 } 00:23:43.034 ] 00:23:43.034 }' 00:23:43.034 [2024-07-16 00:07:17.535768] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:43.034 [2024-07-16 00:07:17.535872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297742 ] 00:23:43.292 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.292 [2024-07-16 00:07:17.596209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.292 [2024-07-16 00:07:17.687099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.550 [2024-07-16 00:07:17.853505] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.550 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:43.550 00:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:43.550 00:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.550 00:07:17 
nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name'
00:23:43.807 00:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.807 00:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:44.065 Running I/O for 1 seconds...
00:23:44.997
00:23:44.997 Latency(us)
00:23:44.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:44.997 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:44.997 Verification LBA range: start 0x0 length 0x2000
00:23:44.997 nvme0n1 : 1.02 3065.16 11.97 0.00 0.00 41351.99 7573.05 55147.33
00:23:44.997 ===================================================================================================================
00:23:44.997 Total : 3065.16 11.97 0.00 0.00 41351.99 7573.05 55147.33
00:23:44.997 0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:23:44.997 nvmf_trace.0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1297742
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1297742 ']'
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1297742
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:44.997 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297742
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297742'
killing process with pid 1297742
00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1297742
Received shutdown signal, test time was about 1.000000 seconds
00:23:45.255
00:23:45.255 Latency(us)
00:23:45.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:45.255 ===================================================================================================================
00:23:45.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1297742
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls --
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.255 rmmod nvme_tcp 00:23:45.255 rmmod nvme_fabrics 00:23:45.255 rmmod nvme_keyring 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1297627 ']' 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1297627 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1297627 ']' 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1297627 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:45.255 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1297627 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1297627' 00:23:45.513 killing process with pid 1297627 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1297627 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1297627 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == 
\t\c\p ]] 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.513 00:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.099 00:07:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.099 00:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.M6eNNvCF6f /tmp/tmp.FC6DVwpg8n /tmp/tmp.QwikHWGTmN 00:23:48.099 00:23:48.099 real 1m18.763s 00:23:48.099 user 2m8.853s 00:23:48.099 sys 0m24.206s 00:23:48.099 00:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:48.099 00:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.099 ************************************ 00:23:48.099 END TEST nvmf_tls 00:23:48.099 ************************************ 00:23:48.099 00:07:22 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:48.099 00:07:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:48.099 00:07:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:48.099 00:07:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.099 ************************************ 00:23:48.099 START TEST nvmf_fips 00:23:48.099 ************************************ 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:48.099 * Looking for test storage... 
00:23:48.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:48.099 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:48.100 Error setting digest 00:23:48.100 00922D1B777F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:48.100 00922D1B777F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:23:48.100 00:07:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:49.477 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:49.477 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:49.477 Found net devices under 0000:08:00.0: cvl_0_0 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:49.477 Found net devices under 0000:08:00.1: cvl_0_1 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:23:49.477 00:23:49.477 --- 10.0.0.2 ping statistics --- 00:23:49.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.477 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:23:49.477 00:23:49.477 --- 10.0.0.1 ping statistics --- 00:23:49.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.477 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.477 00:07:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1299444 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1299444 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1299444 ']' 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:49.735 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 [2024-07-16 00:07:24.102338] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:49.735 [2024-07-16 00:07:24.102426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.735 [2024-07-16 00:07:24.166017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.993 [2024-07-16 00:07:24.252027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.993 [2024-07-16 00:07:24.252077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.993 [2024-07-16 00:07:24.252094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.993 [2024-07-16 00:07:24.252116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.993 [2024-07-16 00:07:24.252128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:49.993 [2024-07-16 00:07:24.252164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:49.993 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.251 [2024-07-16 00:07:24.662032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.251 [2024-07-16 00:07:24.678020] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:50.251 [2024-07-16 00:07:24.678232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.251 [2024-07-16 00:07:24.708575] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.251 malloc0 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1299473 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1299473 /var/tmp/bdevperf.sock 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1299473 ']' 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.251 00:07:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.508 [2024-07-16 00:07:24.814338] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:50.508 [2024-07-16 00:07:24.814442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299473 ] 00:23:50.508 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.508 [2024-07-16 00:07:24.874681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.509 [2024-07-16 00:07:24.962302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.766 00:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:50.766 00:07:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:50.766 00:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:51.024 [2024-07-16 00:07:25.322805] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.024 [2024-07-16 00:07:25.322934] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:51.024 TLSTESTn1 00:23:51.024 00:07:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.024 Running I/O for 10 seconds... 
00:24:03.210 00:24:03.210 Latency(us) 00:24:03.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.210 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:03.210 Verification LBA range: start 0x0 length 0x2000 00:24:03.210 TLSTESTn1 : 10.02 3108.37 12.14 0.00 0.00 41103.21 7427.41 39030.33 00:24:03.210 =================================================================================================================== 00:24:03.210 Total : 3108.37 12.14 0.00 0.00 41103.21 7427.41 39030.33 00:24:03.210 0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.210 nvmf_trace.0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1299473 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1299473 ']' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill 
-0 1299473 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1299473 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1299473' 00:24:03.210 killing process with pid 1299473 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1299473 00:24:03.210 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.210 00:24:03.210 Latency(us) 00:24:03.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.210 =================================================================================================================== 00:24:03.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.210 [2024-07-16 00:07:35.657287] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1299473 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.210 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:24:03.211 rmmod nvme_tcp 00:24:03.211 rmmod nvme_fabrics 00:24:03.211 rmmod nvme_keyring 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1299444 ']' 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1299444 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1299444 ']' 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1299444 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1299444 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1299444' 00:24:03.211 killing process with pid 1299444 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1299444 00:24:03.211 [2024-07-16 00:07:35.898063] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:03.211 00:07:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1299444 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.211 00:07:36 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:03.778 00:24:03.778 real 0m16.083s 00:24:03.778 user 0m18.178s 00:24:03.778 sys 0m6.142s 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:03.778 ************************************ 00:24:03.778 END TEST nvmf_fips 00:24:03.778 ************************************ 00:24:03.778 00:07:38 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:03.778 00:07:38 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:03.778 00:07:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:03.778 00:07:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:03.778 00:07:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:03.778 ************************************ 00:24:03.778 START TEST nvmf_fuzz 00:24:03.778 ************************************ 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:03.778 * Looking for test 
storage... 00:24:03.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.778 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.779 00:07:38 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.680 
00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:05.680 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:05.680 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:05.680 Found net devices under 0000:08:00.0: cvl_0_0 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:05.680 Found net devices under 0000:08:00.1: cvl_0_1 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.680 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:24:05.681 00:07:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:24:05.681 00:24:05.681 --- 10.0.0.2 ping statistics --- 00:24:05.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.681 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:24:05.681 00:24:05.681 --- 10.0.0.1 ping statistics --- 00:24:05.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.681 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1301892 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1301892 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1301892 ']' 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.681 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 Malloc0 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:05.961 00:07:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:38.018 Fuzzing completed. Shutting down the fuzz application 00:24:38.018 00:24:38.018 Dumping successful admin opcodes: 00:24:38.018 8, 9, 10, 24, 00:24:38.018 Dumping successful io opcodes: 00:24:38.018 0, 9, 00:24:38.018 NS: 0x200003aeff00 I/O qp, Total commands completed: 458651, total successful commands: 2659, random_seed: 2599987968 00:24:38.018 NS: 0x200003aeff00 admin qp, Total commands completed: 53612, total successful commands: 431, random_seed: 3894106432 00:24:38.018 00:08:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:38.018 Fuzzing completed. 
Shutting down the fuzz application 00:24:38.018 00:24:38.018 Dumping successful admin opcodes: 00:24:38.018 24, 00:24:38.018 Dumping successful io opcodes: 00:24:38.018 00:24:38.018 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2469691599 00:24:38.018 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2469826095 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.018 rmmod nvme_tcp 00:24:38.018 rmmod nvme_fabrics 00:24:38.018 rmmod nvme_keyring 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1301892 ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1301892 ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1301892' 00:24:38.018 killing process with pid 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 1301892 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.018 00:08:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.925 00:08:14 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.925 00:08:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 
-- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:39.925 00:24:39.925 real 0m36.270s 00:24:39.925 user 0m50.650s 00:24:39.925 sys 0m14.194s 00:24:39.925 00:08:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:39.925 00:08:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.925 ************************************ 00:24:39.925 END TEST nvmf_fuzz 00:24:39.925 ************************************ 00:24:40.229 00:08:14 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:40.229 00:08:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:40.229 00:08:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:40.229 00:08:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:40.229 ************************************ 00:24:40.229 START TEST nvmf_multiconnection 00:24:40.229 ************************************ 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:40.229 * Looking for test storage... 
00:24:40.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.229 
00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.229 00:08:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.130 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.131 
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:42.131 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.131 
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:42.131 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.131 00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:42.131 Found net devices under 
0000:08:00.0: cvl_0_0 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:42.131
Found net devices under 0000:08:00.1: cvl_0_1 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.131
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.131
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:24:42.131
00:24:42.131
--- 10.0.0.2 ping statistics --- 00:24:42.131
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.131
rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:24:42.131
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.131
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:42.131
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:42.131
00:24:42.131
--- 10.0.0.1 ping statistics --- 00:24:42.131
1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.132
rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:42.132
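For reference, the nvmf_tcp_init phase traced above boils down to the following standalone sketch. The interface names (cvl_0_0/cvl_0_1), addresses, and port 4420 are taken from the log; the commands are printed rather than executed, since the real sequence needs root and the cvl_0_* devices:

```shell
# Dry-run sketch of the nvmf_tcp_init network setup traced in the log above.
# Commands are echoed instead of run because they require root privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target side moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
run ping -c 1 10.0.0.2                                       # initiator -> target reachability check
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator check
```

Splitting the two ports of one NIC across a network namespace boundary is what lets a single host act as both NVMe-oF target and initiator, as the back-to-back pings in the log confirm.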
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1306099 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1306099 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1306099 ']' 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.132
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.132
[2024-07-16 00:08:16.332487] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:42.132
[2024-07-16 00:08:16.332588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.132
EAL: No free 2048 kB hugepages reported on node 1 00:24:42.132
[2024-07-16 00:08:16.397740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.132
[2024-07-16 00:08:16.486750] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.132
[2024-07-16 00:08:16.486805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.132
[2024-07-16 00:08:16.486822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.132
[2024-07-16 00:08:16.486835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.132
[2024-07-16 00:08:16.486847] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.132
[2024-07-16 00:08:16.486933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.132
[2024-07-16 00:08:16.486986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.132
[2024-07-16 00:08:16.487038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.132
[2024-07-16 00:08:16.487040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.132
00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.132
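The provisioning that follows repeats one pattern eleven times (seq 1 11 in target/multiconnection.sh): create a malloc bdev, wrap it in a subsystem, attach the namespace, add a TCP listener. A condensed dry-run sketch, with a stub standing in for the test's rpc_cmd wrapper around SPDK's rpc.py (sizes, NQNs, and serials taken from the log):

```shell
# Dry-run sketch of the per-subsystem provisioning loop from
# target/multiconnection.sh (NVMF_SUBSYS=11 in the traced run).
# rpc_cmd is stubbed to print the SPDK RPC it would issue.
rpc_cmd() { echo "rpc.py $*"; }

NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"          # 64 MiB bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
```

The serial numbers (SPDK1..SPDK11) are what the later waitforserial step greps for in lsblk output to confirm each nvme connect actually produced a block device.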
[2024-07-16 00:08:16.616616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.132 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 Malloc1 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.390 00:08:16 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 [2024-07-16 00:08:16.669719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.390 Malloc2 00:24:42.390 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 Malloc3 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 Malloc4 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 Malloc5 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 Malloc6 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.391 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 Malloc7 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 Malloc8 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 Malloc9 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 Malloc10 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 Malloc11 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.649 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:43.214 00:08:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:43.214 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:43.214 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.214 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:43.214 00:08:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- 
# (( nvme_devices == nvme_device_counter ))
00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:45.744 00:08:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:24:45.744 00:08:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:24:45.744 00:08:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:45.744 00:08:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:45.744 00:08:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:45.744 00:08:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:47.638 00:08:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:24:48.234 00:08:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:24:48.234 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:48.234 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:48.234 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:48.234 00:08:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:50.127 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:50.127 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:50.127 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3
00:24:50.384 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:50.384 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:50.384 00:08:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:50.384 00:08:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:50.385 00:08:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:24:50.642 00:08:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:24:50.643 00:08:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:50.643 00:08:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:50.643 00:08:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:50.643 00:08:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:53.176 00:08:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:55.702 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:55.702 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:55.702 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5
00:24:55.702 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:55.703 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:55.703 00:08:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:55.703 00:08:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:55.703 00:08:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:24:55.960 00:08:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:24:55.960 00:08:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:55.960 00:08:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:55.960 00:08:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:55.960 00:08:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:24:57.857 00:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:24:57.858 00:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:24:58.792 00:08:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:24:58.792 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:24:58.792 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:24:58.792 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:24:58.792 00:08:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:00.717 00:08:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:25:01.281 00:08:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:25:01.281 00:08:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:25:01.281 00:08:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:25:01.281 00:08:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:25:01.281 00:08:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:03.178 00:08:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:25:04.112 00:08:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:25:04.112 00:08:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:25:04.112 00:08:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:25:04.112 00:08:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:25:04.112 00:08:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:06.010 00:08:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:25:06.576 00:08:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:25:06.576 00:08:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:25:06.576 00:08:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:25:06.576 00:08:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:25:06.576 00:08:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:09.099 00:08:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:25:09.355 00:08:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:25:09.355 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:25:09.355 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:25:09.355 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:25:09.355 00:08:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:25:11.254 00:08:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:25:11.254 [global]
00:25:11.254 thread=1
00:25:11.254 invalidate=1
00:25:11.254 rw=read
00:25:11.254 time_based=1
00:25:11.254 runtime=10
00:25:11.254 ioengine=libaio
00:25:11.254 direct=1
00:25:11.254 bs=262144
00:25:11.254 iodepth=64
00:25:11.254 norandommap=1
00:25:11.254 numjobs=1
00:25:11.254
00:25:11.254 [job0]
00:25:11.254 filename=/dev/nvme0n1
00:25:11.254 [job1]
00:25:11.254 filename=/dev/nvme10n1
00:25:11.254 [job2]
00:25:11.254 filename=/dev/nvme1n1
00:25:11.254 [job3]
00:25:11.254 filename=/dev/nvme2n1
00:25:11.254 [job4]
00:25:11.254 filename=/dev/nvme3n1
00:25:11.254 [job5]
00:25:11.254 filename=/dev/nvme4n1
00:25:11.254 [job6]
00:25:11.254 filename=/dev/nvme5n1
00:25:11.254 [job7]
00:25:11.254 filename=/dev/nvme6n1
00:25:11.511 [job8]
00:25:11.511 filename=/dev/nvme7n1
00:25:11.511 [job9]
00:25:11.511 filename=/dev/nvme8n1
00:25:11.511 [job10]
00:25:11.511 filename=/dev/nvme9n1
00:25:11.511 Could not set queue depth (nvme0n1)
00:25:11.511 Could not set queue depth (nvme10n1)
00:25:11.512 Could not set queue depth (nvme1n1)
00:25:11.512 Could not set queue depth (nvme2n1)
00:25:11.512 Could not set queue depth (nvme3n1)
00:25:11.512 Could not set queue depth (nvme4n1)
00:25:11.512 Could not set queue depth (nvme5n1)
00:25:11.512 Could not set queue depth (nvme6n1)
00:25:11.512 Could not set queue depth (nvme7n1)
00:25:11.512 Could not set queue depth (nvme8n1)
00:25:11.512 Could not set queue depth (nvme9n1)
00:25:11.769 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:11.769 fio-3.35
00:25:11.769 Starting 11 threads
00:25:23.969
00:25:23.969 job0: (groupid=0, jobs=1): err= 0: pid=1309214: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=637, BW=159MiB/s (167MB/s)(1606MiB/10077msec)
00:25:23.969 slat (usec): min=12, max=82798, avg=1248.44, stdev=4685.01
00:25:23.969 clat (msec): min=2, max=276, avg=98.99, stdev=43.43
00:25:23.969 lat (msec): min=2, max=276, avg=100.23, stdev=43.98
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 46], 20.00th=[ 64],
00:25:23.969 | 30.00th=[ 75], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 109],
00:25:23.969 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 171],
00:25:23.969 | 99.00th=[ 194], 99.50th=[ 203], 99.90th=[ 213], 99.95th=[ 218],
00:25:23.969 | 99.99th=[ 275]
00:25:23.969 bw ( KiB/s): min=96768, max=260608, per=8.67%, avg=162847.10, stdev=42955.63, samples=20
00:25:23.969 iops : min= 378, max= 1018, avg=636.05, stdev=167.79, samples=20
00:25:23.969 lat (msec) : 4=0.50%, 10=1.60%, 20=2.16%, 50=7.52%, 100=41.35%
00:25:23.969 lat (msec) : 250=46.85%, 500=0.02%
00:25:23.969 cpu : usr=0.43%, sys=2.15%, ctx=1069, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=6425,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job1: (groupid=0, jobs=1): err= 0: pid=1309215: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=568, BW=142MiB/s (149MB/s)(1436MiB/10103msec)
00:25:23.969 slat (usec): min=10, max=119661, avg=1167.47, stdev=4950.93
00:25:23.969 clat (msec): min=2, max=351, avg=111.33, stdev=48.99
00:25:23.969 lat (msec): min=3, max=351, avg=112.50, stdev=49.70
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 67],
00:25:23.969 | 30.00th=[ 78], 40.00th=[ 92], 50.00th=[ 112], 60.00th=[ 124],
00:25:23.969 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 178], 95.00th=[ 197],
00:25:23.969 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 284], 99.95th=[ 338],
00:25:23.969 | 99.99th=[ 351]
00:25:23.969 bw ( KiB/s): min=77312, max=213504, per=7.74%, avg=145363.55, stdev=42549.49, samples=20
00:25:23.969 iops : min= 302, max= 834, avg=567.80, stdev=166.22, samples=20
00:25:23.969 lat (msec) : 4=0.17%, 10=1.10%, 20=1.17%, 50=4.02%, 100=37.13%
00:25:23.969 lat (msec) : 250=55.92%, 500=0.49%
00:25:23.969 cpu : usr=0.34%, sys=1.96%, ctx=1115, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job2: (groupid=0, jobs=1): err= 0: pid=1309217: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=657, BW=164MiB/s (172MB/s)(1658MiB/10077msec)
00:25:23.969 slat (usec): min=11, max=122021, avg=1276.21, stdev=5315.10
00:25:23.969 clat (usec): min=1296, max=315161, avg=95859.24, stdev=49945.67
00:25:23.969 lat (usec): min=1348, max=324253, avg=97135.45, stdev=50751.35
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 58],
00:25:23.969 | 30.00th=[ 69], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 99],
00:25:23.969 | 70.00th=[ 112], 80.00th=[ 134], 90.00th=[ 167], 95.00th=[ 192],
00:25:23.969 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 284], 99.95th=[ 292],
00:25:23.969 | 99.99th=[ 317]
00:25:23.969 bw ( KiB/s): min=83968, max=320000, per=8.94%, avg=168052.60, stdev=62178.74, samples=20
00:25:23.969 iops : min= 328, max= 1250, avg=656.40, stdev=242.89, samples=20
00:25:23.969 lat (msec) : 2=0.05%, 4=0.29%, 10=1.13%, 20=2.37%, 50=13.12%
00:25:23.969 lat (msec) : 100=44.52%, 250=37.99%, 500=0.53%
00:25:23.969 cpu : usr=0.38%, sys=2.56%, ctx=1143, majf=0, minf=3974
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job3: (groupid=0, jobs=1): err= 0: pid=1309224: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=611, BW=153MiB/s (160MB/s)(1544MiB/10103msec)
00:25:23.969 slat (usec): min=10, max=97085, avg=997.83, stdev=4494.07
00:25:23.969 clat (usec): min=1002, max=307488, avg=103635.87, stdev=55139.52
00:25:23.969 lat (usec): min=1032, max=342694, avg=104633.71, stdev=55776.38
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 43],
00:25:23.969 | 30.00th=[ 67], 40.00th=[ 93], 50.00th=[ 111], 60.00th=[ 126],
00:25:23.969 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 190],
00:25:23.969 | 99.00th=[ 234], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 275],
00:25:23.969 | 99.99th=[ 309]
00:25:23.969 bw ( KiB/s): min=90624, max=418816, per=8.32%, avg=156373.65, stdev=74228.89, samples=20
00:25:23.969 iops : min= 354, max= 1636, avg=610.75, stdev=289.96, samples=20
00:25:23.969 lat (msec) : 2=0.08%, 4=0.94%, 10=2.24%, 20=2.69%, 50=16.20%
00:25:23.969 lat (msec) : 100=21.90%, 250=55.51%, 500=0.45%
00:25:23.969 cpu : usr=0.31%, sys=1.97%, ctx=1261, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=6174,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job4: (groupid=0, jobs=1): err= 0: pid=1309225: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=606, BW=152MiB/s (159MB/s)(1524MiB/10059msec)
00:25:23.969 slat (usec): min=12, max=92986, avg=1349.67, stdev=4936.82
00:25:23.969 clat (msec): min=13, max=282, avg=104.13, stdev=47.27
00:25:23.969 lat (msec): min=13, max=290, avg=105.48, stdev=48.14
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 51], 20.00th=[ 62],
00:25:23.969 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 100], 60.00th=[ 113],
00:25:23.969 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 174], 95.00th=[ 192],
00:25:23.969 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 271], 99.95th=[ 275],
00:25:23.969 | 99.99th=[ 284]
00:25:23.969 bw ( KiB/s): min=83968, max=226304, per=8.22%, avg=154484.35, stdev=44340.02, samples=20
00:25:23.969 iops : min= 328, max= 884, avg=603.35, stdev=173.16, samples=20
00:25:23.969 lat (msec) : 20=0.69%, 50=9.38%, 100=40.27%, 250=48.94%, 500=0.72%
00:25:23.969 cpu : usr=0.31%, sys=2.23%, ctx=992, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=6097,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job5: (groupid=0, jobs=1): err= 0: pid=1309226: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=678, BW=170MiB/s (178MB/s)(1715MiB/10112msec)
00:25:23.969 slat (usec): min=11, max=121099, avg=1097.55, stdev=4770.01
00:25:23.969 clat (usec): min=1377, max=263616, avg=93105.89, stdev=54086.58
00:25:23.969 lat (usec): min=1403, max=317244, avg=94203.44, stdev=54881.09
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 31],
00:25:23.969 | 30.00th=[ 55], 40.00th=[ 75], 50.00th=[ 103], 60.00th=[ 117],
00:25:23.969 | 70.00th=[ 130], 80.00th=[ 140], 90.00th=[ 159], 95.00th=[ 178],
00:25:23.969 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 259], 99.95th=[ 259],
00:25:23.969 | 99.99th=[ 264]
00:25:23.969 bw ( KiB/s): min=95041, max=406528, per=9.25%, avg=173902.35, stdev=87706.34, samples=20
00:25:23.969 iops : min= 371, max= 1588, avg=679.25, stdev=342.63, samples=20
00:25:23.969 lat (msec) : 2=0.16%, 4=0.42%, 10=2.25%, 20=6.20%, 50=19.54%
00:25:23.969 lat (msec) : 100=20.75%, 250=50.45%, 500=0.23%
00:25:23.969 cpu : usr=0.45%, sys=2.23%, ctx=1221, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=6858,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job6: (groupid=0, jobs=1): err= 0: pid=1309227: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=578, BW=145MiB/s (152MB/s)(1462MiB/10111msec)
00:25:23.969 slat (usec): min=11, max=125500, avg=1138.43, stdev=4744.18
00:25:23.969 clat (msec): min=6, max=333, avg=109.36, stdev=46.33
00:25:23.969 lat (msec): min=6, max=333, avg=110.50, stdev=47.11
00:25:23.969 clat percentiles (msec):
00:25:23.969 | 1.00th=[ 16], 5.00th=[ 39], 10.00th=[ 52], 20.00th=[ 67],
00:25:23.969 | 30.00th=[ 80], 40.00th=[ 92], 50.00th=[ 108], 60.00th=[ 122],
00:25:23.969 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 174], 95.00th=[ 188],
00:25:23.969 | 99.00th=[ 207], 99.50th=[ 222], 99.90th=[ 236], 99.95th=[ 317],
00:25:23.969 | 99.99th=[ 334]
00:25:23.969 bw ( KiB/s): min=87040, max=241152, per=7.88%, avg=148065.35, stdev=37120.53, samples=20
00:25:23.969 iops : min= 340, max= 942, avg=578.30, stdev=144.99, samples=20
00:25:23.969 lat (msec) : 10=0.31%, 20=1.15%, 50=8.09%, 100=35.29%, 250=55.10%
00:25:23.969 lat (msec) : 500=0.07%
00:25:23.969 cpu : usr=0.35%, sys=1.91%, ctx=1151, majf=0, minf=4097
00:25:23.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:25:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.969 issued rwts: total=5849,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.969 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.969 job7: (groupid=0, jobs=1): err= 0: pid=1309228: Tue Jul 16 00:08:56 2024
00:25:23.969 read: IOPS=696, BW=174MiB/s (183MB/s)(1754MiB/10072msec)
00:25:23.969 slat (usec): min=10, max=134089, avg=1009.84, stdev=4720.13
00:25:23.969 clat (usec): min=1118, max=284580, avg=90755.89, stdev=51153.31
00:25:23.970 lat (usec): min=1144, max=284594, avg=91765.73, stdev=51639.30
00:25:23.970 clat percentiles (msec):
00:25:23.970 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 31], 20.00th=[ 36],
00:25:23.970 | 30.00th=[ 54], 40.00th=[ 73], 50.00th=[ 89], 60.00th=[ 105],
00:25:23.970 | 70.00th=[ 121], 80.00th=[ 140], 90.00th=[ 159], 95.00th=[ 174],
00:25:23.970 | 99.00th=[ 215], 99.50th=[ 259], 99.90th=[ 279], 99.95th=[ 284],
00:25:23.970 | 99.99th=[ 284]
00:25:23.970 bw ( KiB/s): min=102912, max=372502, per=9.47%, avg=177948.55, stdev=75372.69, samples=20
00:25:23.970 iops : min= 402, max= 1455, avg=695.10, stdev=294.42, samples=20
00:25:23.970 lat (msec) : 2=0.10%, 4=1.15%, 10=1.33%, 20=2.18%, 50=23.46%
00:25:23.970 lat (msec) : 100=29.39%, 250=41.68%, 500=0.71%
00:25:23.970 cpu : usr=0.42%, sys=2.19%, ctx=1183, majf=0, minf=4097
00:25:23.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:25:23.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.970 issued rwts: total=7017,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.970 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.970 job8: (groupid=0, jobs=1): err= 0: pid=1309229: Tue Jul 16 00:08:56 2024
00:25:23.970 read: IOPS=922, BW=231MiB/s (242MB/s)(2331MiB/10102msec)
00:25:23.970 slat (usec): min=10, max=105575, avg=726.41, stdev=3552.07
00:25:23.970 clat (usec): min=888, max=278460, avg=68558.52, stdev=50293.82
00:25:23.970 lat (usec): min=911, max=278485, avg=69284.93, stdev=50754.91
00:25:23.970 clat percentiles (msec):
00:25:23.970 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 29],
00:25:23.970 | 30.00th=[ 32], 40.00th=[ 36], 50.00th=[ 50], 60.00th=[ 77],
00:25:23.970 | 70.00th=[ 92], 80.00th=[ 109], 90.00th=[ 140], 95.00th=[ 174],
00:25:23.970 | 99.00th=[ 213], 99.50th=[ 228], 99.90th=[ 251], 99.95th=[ 275],
00:25:23.970 | 99.99th=[ 279]
00:25:23.970 bw ( KiB/s): min=89088, max=501760, per=12.62%, avg=237058.40, stdev=122787.34, samples=20
00:25:23.970 iops : min= 348, max= 1960, avg=925.90, stdev=479.68, samples=20
00:25:23.970 lat (usec) : 1000=0.01%
00:25:23.970 lat (msec) : 2=0.23%, 4=1.22%, 10=3.46%, 20=4.71%, 50=40.72%
00:25:23.970 lat (msec) : 100=25.86%, 250=23.69%, 500=0.10%
00:25:23.970 cpu : usr=0.46%, sys=3.08%, ctx=1490, majf=0, minf=4097
00:25:23.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:25:23.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.970 issued rwts: total=9322,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.970 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.970 job9: (groupid=0, jobs=1): err= 0: pid=1309230: Tue Jul 16 00:08:56 2024
00:25:23.970 read: IOPS=555, BW=139MiB/s (146MB/s)(1398MiB/10056msec)
00:25:23.970 slat (usec): min=14, max=122930, avg=1370.45, stdev=5265.84
00:25:23.970 clat (msec): min=3, max=267, avg=113.68, stdev=50.02
00:25:23.970 lat (msec): min=3, max=357, avg=115.05, stdev=50.68
00:25:23.970 clat percentiles (msec):
00:25:23.970 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 41], 20.00th=[ 72],
00:25:23.970 | 30.00th=[ 92], 40.00th=[ 105], 50.00th=[ 116], 60.00th=[ 127],
00:25:23.970 | 70.00th=[ 138], 80.00th=[ 155], 90.00th=[ 178], 95.00th=[ 197],
00:25:23.970 | 99.00th=[ 232], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 266],
00:25:23.970 | 99.99th=[ 268]
00:25:23.970 bw ( KiB/s): min=82944, max=302499, per=7.53%, avg=141413.40, stdev=55415.03, samples=20
00:25:23.970 iops : min= 324, max= 1181, avg=552.30, stdev=216.38, samples=20
00:25:23.970 lat (msec) : 4=0.13%, 10=1.13%, 20=2.18%, 50=9.61%, 100=23.31%
00:25:23.970 lat (msec) : 250=63.33%, 500=0.32%
00:25:23.970 cpu : usr=0.22%, sys=2.15%, ctx=1025, majf=0, minf=4097
00:25:23.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:25:23.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.970 issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.970 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.970 job10: (groupid=0, jobs=1): err= 0: pid=1309231: Tue Jul 16 00:08:56 2024
00:25:23.970 read: IOPS=842, BW=211MiB/s (221MB/s)(2129MiB/10105msec)
00:25:23.970 slat (usec): min=11, max=66931, avg=906.01, stdev=3636.08
00:25:23.970 clat (usec): min=1620, max=230953, avg=74916.95, stdev=40560.86
00:25:23.970 lat (usec): min=1646, max=231009, avg=75822.96, stdev=40930.53
00:25:23.970 clat percentiles (msec):
00:25:23.970 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 31], 20.00th=[ 40],
00:25:23.970 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 67], 60.00th=[ 79],
00:25:23.970 | 70.00th=[ 91], 80.00th=[ 110], 90.00th=[ 134], 95.00th=[ 155],
00:25:23.970 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 215],
00:25:23.970 | 99.99th=[ 232]
00:25:23.970 bw ( KiB/s): min=97792, max=432640, per=11.52%, avg=216435.60, stdev=90630.84, samples=20
00:25:23.970 iops : min= 382, max= 1690, avg=845.40, stdev=354.04, samples=20
00:25:23.970 lat (msec) : 2=0.01%, 4=0.25%, 10=1.14%, 20=2.74%, 50=26.65%
00:25:23.970 lat (msec) : 100=45.18%, 250=24.03%
00:25:23.970 cpu : usr=0.29%, sys=2.86%, ctx=1202, majf=0, minf=4097
00:25:23.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:25:23.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:23.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:25:23.970 issued rwts: total=8517,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:23.970 latency : target=0, window=0, percentile=100.00%, depth=64
00:25:23.970
00:25:23.970 Run status group 0 (all jobs):
00:25:23.970 READ: bw=1835MiB/s (1924MB/s), 139MiB/s-231MiB/s (146MB/s-242MB/s), io=18.1GiB (19.5GB), run=10056-10112msec
00:25:23.970
00:25:23.970 Disk stats (read/write):
00:25:23.970 nvme0n1: ios=12624/0, merge=0/0, ticks=1230477/0, in_queue=1230477, util=96.65%
00:25:23.970 nvme10n1: ios=11163/0, merge=0/0, ticks=1234515/0, in_queue=1234515, util=96.98%
00:25:23.970 nvme1n1: ios=13084/0, merge=0/0, ticks=1229949/0, in_queue=1229949, util=97.50%
00:25:23.970 nvme2n1: ios=12017/0, merge=0/0, ticks=1236402/0, in_queue=1236402, util=97.69%
00:25:23.970 nvme3n1: ios=11888/0, merge=0/0, ticks=1233286/0, in_queue=1233286, util=97.82%
00:25:23.970 nvme4n1: ios=13314/0, merge=0/0, ticks=1234951/0, in_queue=1234951, util=98.19%
00:25:23.970 nvme5n1: ios=11409/0, merge=0/0, ticks=1235977/0, in_queue=1235977, util=98.39%
00:25:23.970 nvme6n1: ios=13793/0, merge=0/0, ticks=1232309/0, in_queue=1232309, util=98.42%
00:25:23.970 nvme7n1: ios=18436/0, merge=0/0, ticks=1232690/0, in_queue=1232690, util=98.86%
00:25:23.970 nvme8n1: ios=10891/0, merge=0/0, ticks=1233314/0, in_queue=1233314, util=98.98%
00:25:23.970 nvme9n1: ios=16762/0, merge=0/0, ticks=1234800/0, in_queue=1234800, util=99.17%
00:25:23.970 00:08:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:23.970 [global]
00:25:23.970 thread=1
00:25:23.970 invalidate=1
00:25:23.970 rw=randwrite
00:25:23.970 time_based=1
00:25:23.970 runtime=10
00:25:23.970 ioengine=libaio
00:25:23.970 direct=1
00:25:23.970 bs=262144
00:25:23.970 iodepth=64
00:25:23.970 norandommap=1
00:25:23.970 numjobs=1
00:25:23.970
00:25:23.970 [job0]
00:25:23.970 filename=/dev/nvme0n1
00:25:23.970 [job1]
00:25:23.970 filename=/dev/nvme10n1
00:25:23.970 [job2]
00:25:23.970 filename=/dev/nvme1n1
00:25:23.970 [job3]
00:25:23.970 filename=/dev/nvme2n1
00:25:23.970 [job4]
00:25:23.970 filename=/dev/nvme3n1
00:25:23.970 [job5]
00:25:23.970 filename=/dev/nvme4n1
00:25:23.970 [job6]
00:25:23.970 filename=/dev/nvme5n1
00:25:23.970 [job7]
00:25:23.970 filename=/dev/nvme6n1
00:25:23.970 [job8]
00:25:23.970 filename=/dev/nvme7n1
00:25:23.970 [job9]
00:25:23.970 filename=/dev/nvme8n1
00:25:23.970 [job10]
00:25:23.970 filename=/dev/nvme9n1
00:25:23.970 Could not set queue depth (nvme0n1)
00:25:23.970 Could not set queue depth (nvme10n1)
00:25:23.970 Could not set queue depth (nvme1n1)
00:25:23.970 Could not set queue depth (nvme2n1)
00:25:23.970 Could not set queue depth (nvme3n1)
00:25:23.970 Could not set queue depth (nvme4n1)
00:25:23.970 Could not set queue depth (nvme5n1)
00:25:23.970 Could not set queue depth (nvme6n1)
00:25:23.970 Could not set queue depth (nvme7n1)
00:25:23.970 Could not set queue depth (nvme8n1)
00:25:23.970 Could not set queue depth (nvme9n1)
00:25:23.970 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:23.970 fio-3.35
00:25:23.970 Starting 11 threads
00:25:33.975
00:25:33.975 job0: (groupid=0, jobs=1): err= 0: pid=1310145: Tue Jul 16 00:09:07 2024
00:25:33.975 write: IOPS=460, BW=115MiB/s (121MB/s)(1162MiB/10086msec); 0 zone resets
00:25:33.975 slat (usec): min=19, max=138336, avg=1153.85, stdev=4622.87
00:25:33.975 clat (usec): min=1034, max=409336, avg=137704.14, stdev=75809.58
00:25:33.975 lat (usec): min=1097, max=430758, avg=138857.99, stdev=76748.81
00:25:33.975 clat percentiles (msec):
00:25:33.975 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 39], 20.00th=[ 68],
00:25:33.975 | 30.00th=[ 92], 40.00th=[ 116], 50.00th=[ 136], 60.00th=[ 155],
00:25:33.975 | 70.00th=[ 178], 80.00th=[ 197], 90.00th=[ 236], 95.00th=[ 271],
00:25:33.975 | 99.00th=[ 355], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 405],
00:25:33.975 | 99.99th=[ 409]
00:25:33.975 bw ( KiB/s): min=53248, max=169472, per=8.15%, avg=117350.40, stdev=33744.44, samples=20
00:25:33.975 iops : min= 208, max= 662, avg=458.40, stdev=131.81, samples=20
00:25:33.975 lat (msec) : 2=0.26%, 4=0.11%, 10=0.86%, 20=2.88%, 50=9.98%
00:25:33.975 lat (msec) : 100=19.37%, 250=59.63%, 500=6.91%
00:25:33.975 cpu : usr=1.56%, sys=1.53%, ctx=3318, majf=0, minf=1
00:25:33.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:25:33.975 submit :
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.975 issued rwts: total=0,4647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.975 job1: (groupid=0, jobs=1): err= 0: pid=1310148: Tue Jul 16 00:09:07 2024 00:25:33.975 write: IOPS=585, BW=146MiB/s (153MB/s)(1479MiB/10101msec); 0 zone resets 00:25:33.975 slat (usec): min=19, max=59841, avg=515.63, stdev=2619.93 00:25:33.975 clat (usec): min=854, max=413656, avg=108721.98, stdev=77592.69 00:25:33.975 lat (usec): min=906, max=428909, avg=109237.61, stdev=78096.07 00:25:33.975 clat percentiles (msec): 00:25:33.975 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 16], 20.00th=[ 35], 00:25:33.975 | 30.00th=[ 52], 40.00th=[ 75], 50.00th=[ 99], 60.00th=[ 124], 00:25:33.975 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 215], 95.00th=[ 251], 00:25:33.975 | 99.00th=[ 321], 99.50th=[ 372], 99.90th=[ 405], 99.95th=[ 409], 00:25:33.975 | 99.99th=[ 414] 00:25:33.975 bw ( KiB/s): min=67072, max=339456, per=10.40%, avg=149785.60, stdev=64409.79, samples=20 00:25:33.975 iops : min= 262, max= 1326, avg=585.10, stdev=251.60, samples=20 00:25:33.975 lat (usec) : 1000=0.12% 00:25:33.975 lat (msec) : 2=0.59%, 4=1.34%, 10=4.45%, 20=5.92%, 50=16.08% 00:25:33.975 lat (msec) : 100=22.18%, 250=44.23%, 500=5.09% 00:25:33.975 cpu : usr=1.58%, sys=2.04%, ctx=4888, majf=0, minf=1 00:25:33.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:33.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.975 issued rwts: total=0,5914,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.975 job2: (groupid=0, jobs=1): err= 0: pid=1310160: Tue Jul 16 00:09:07 2024 00:25:33.975 write: 
IOPS=437, BW=109MiB/s (115MB/s)(1106MiB/10110msec); 0 zone resets 00:25:33.975 slat (usec): min=21, max=147725, avg=1228.91, stdev=5682.35 00:25:33.975 clat (usec): min=923, max=516149, avg=144921.76, stdev=91129.97 00:25:33.975 lat (usec): min=962, max=516206, avg=146150.67, stdev=92076.14 00:25:33.975 clat percentiles (msec): 00:25:33.975 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 50], 00:25:33.975 | 30.00th=[ 91], 40.00th=[ 121], 50.00th=[ 144], 60.00th=[ 171], 00:25:33.975 | 70.00th=[ 190], 80.00th=[ 209], 90.00th=[ 251], 95.00th=[ 309], 00:25:33.975 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 430], 99.95th=[ 430], 00:25:33.975 | 99.99th=[ 518] 00:25:33.975 bw ( KiB/s): min=61952, max=153600, per=7.76%, avg=111667.20, stdev=25724.14, samples=20 00:25:33.975 iops : min= 242, max= 600, avg=436.20, stdev=100.48, samples=20 00:25:33.975 lat (usec) : 1000=0.05% 00:25:33.975 lat (msec) : 2=0.16%, 4=0.56%, 10=1.22%, 20=5.11%, 50=13.04% 00:25:33.975 lat (msec) : 100=12.47%, 250=57.18%, 500=10.19%, 750=0.02% 00:25:33.975 cpu : usr=1.07%, sys=1.60%, ctx=2952, majf=0, minf=1 00:25:33.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:33.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.975 issued rwts: total=0,4425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.975 job3: (groupid=0, jobs=1): err= 0: pid=1310161: Tue Jul 16 00:09:07 2024 00:25:33.975 write: IOPS=493, BW=123MiB/s (129MB/s)(1247MiB/10116msec); 0 zone resets 00:25:33.975 slat (usec): min=17, max=54008, avg=660.85, stdev=2935.22 00:25:33.975 clat (usec): min=1151, max=340553, avg=129060.85, stdev=77745.00 00:25:33.975 lat (usec): min=1181, max=340630, avg=129721.70, stdev=78390.75 00:25:33.975 clat percentiles (msec): 00:25:33.975 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 28], 
20.00th=[ 55], 00:25:33.975 | 30.00th=[ 85], 40.00th=[ 105], 50.00th=[ 123], 60.00th=[ 144], 00:25:33.975 | 70.00th=[ 167], 80.00th=[ 190], 90.00th=[ 243], 95.00th=[ 275], 00:25:33.975 | 99.00th=[ 321], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:25:33.975 | 99.99th=[ 342] 00:25:33.975 bw ( KiB/s): min=69120, max=235520, per=8.76%, avg=126080.00, stdev=41401.54, samples=20 00:25:33.975 iops : min= 270, max= 920, avg=492.50, stdev=161.72, samples=20 00:25:33.975 lat (msec) : 2=0.22%, 4=1.46%, 10=2.20%, 20=3.27%, 50=11.49% 00:25:33.975 lat (msec) : 100=18.82%, 250=54.04%, 500=8.50% 00:25:33.975 cpu : usr=1.32%, sys=1.94%, ctx=3994, majf=0, minf=1 00:25:33.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:33.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.975 issued rwts: total=0,4989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.975 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.975 job4: (groupid=0, jobs=1): err= 0: pid=1310162: Tue Jul 16 00:09:07 2024 00:25:33.975 write: IOPS=532, BW=133MiB/s (140MB/s)(1343MiB/10088msec); 0 zone resets 00:25:33.976 slat (usec): min=19, max=129889, avg=869.15, stdev=3671.68 00:25:33.976 clat (usec): min=793, max=367332, avg=119267.85, stdev=83408.10 00:25:33.976 lat (usec): min=891, max=367376, avg=120137.00, stdev=84147.60 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 19], 20.00th=[ 46], 00:25:33.976 | 30.00th=[ 55], 40.00th=[ 79], 50.00th=[ 101], 60.00th=[ 136], 00:25:33.976 | 70.00th=[ 174], 80.00th=[ 197], 90.00th=[ 232], 95.00th=[ 262], 00:25:33.976 | 99.00th=[ 342], 99.50th=[ 351], 99.90th=[ 363], 99.95th=[ 368], 00:25:33.976 | 99.99th=[ 368] 00:25:33.976 bw ( KiB/s): min=43008, max=308224, per=9.44%, avg=135910.40, stdev=60867.50, samples=20 00:25:33.976 iops : min= 168, max= 1204, avg=530.90, 
stdev=237.76, samples=20 00:25:33.976 lat (usec) : 1000=0.19% 00:25:33.976 lat (msec) : 2=0.15%, 4=1.01%, 10=2.89%, 20=6.70%, 50=12.71% 00:25:33.976 lat (msec) : 100=26.38%, 250=43.89%, 500=6.09% 00:25:33.976 cpu : usr=1.50%, sys=1.64%, ctx=3846, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,5372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job5: (groupid=0, jobs=1): err= 0: pid=1310163: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=441, BW=110MiB/s (116MB/s)(1114MiB/10102msec); 0 zone resets 00:25:33.976 slat (usec): min=17, max=95233, avg=1046.71, stdev=4307.47 00:25:33.976 clat (usec): min=967, max=357187, avg=143961.05, stdev=85177.96 00:25:33.976 lat (usec): min=1003, max=357222, avg=145007.76, stdev=86087.12 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 52], 00:25:33.976 | 30.00th=[ 89], 40.00th=[ 120], 50.00th=[ 153], 60.00th=[ 178], 00:25:33.976 | 70.00th=[ 197], 80.00th=[ 222], 90.00th=[ 251], 95.00th=[ 275], 00:25:33.976 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:25:33.976 | 99.99th=[ 359] 00:25:33.976 bw ( KiB/s): min=51200, max=213504, per=7.81%, avg=112486.40, stdev=45211.02, samples=20 00:25:33.976 iops : min= 200, max= 834, avg=439.40, stdev=176.61, samples=20 00:25:33.976 lat (usec) : 1000=0.07% 00:25:33.976 lat (msec) : 2=0.13%, 4=0.81%, 10=3.10%, 20=4.89%, 50=10.43% 00:25:33.976 lat (msec) : 100=14.72%, 250=55.58%, 500=10.28% 00:25:33.976 cpu : usr=1.20%, sys=1.60%, ctx=3421, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,4457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job6: (groupid=0, jobs=1): err= 0: pid=1310164: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=550, BW=138MiB/s (144MB/s)(1392MiB/10116msec); 0 zone resets 00:25:33.976 slat (usec): min=19, max=53176, avg=622.26, stdev=2778.75 00:25:33.976 clat (usec): min=717, max=339112, avg=115624.11, stdev=74455.57 00:25:33.976 lat (usec): min=744, max=339140, avg=116246.37, stdev=75019.56 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 40], 00:25:33.976 | 30.00th=[ 62], 40.00th=[ 86], 50.00th=[ 113], 60.00th=[ 138], 00:25:33.976 | 70.00th=[ 161], 80.00th=[ 188], 90.00th=[ 218], 95.00th=[ 241], 00:25:33.976 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 330], 99.95th=[ 334], 00:25:33.976 | 99.99th=[ 338] 00:25:33.976 bw ( KiB/s): min=76800, max=235008, per=9.79%, avg=140902.40, stdev=41789.86, samples=20 00:25:33.976 iops : min= 300, max= 918, avg=550.40, stdev=163.24, samples=20 00:25:33.976 lat (usec) : 750=0.02%, 1000=0.23% 00:25:33.976 lat (msec) : 2=0.66%, 4=1.01%, 10=3.23%, 20=4.89%, 50=15.93% 00:25:33.976 lat (msec) : 100=19.94%, 250=50.71%, 500=3.38% 00:25:33.976 cpu : usr=1.60%, sys=2.02%, ctx=4442, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,5567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job7: (groupid=0, jobs=1): err= 0: pid=1310165: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=466, BW=117MiB/s 
(122MB/s)(1183MiB/10132msec); 0 zone resets 00:25:33.976 slat (usec): min=16, max=90005, avg=1208.00, stdev=4474.94 00:25:33.976 clat (usec): min=931, max=356085, avg=135818.40, stdev=85561.08 00:25:33.976 lat (usec): min=963, max=356145, avg=137026.40, stdev=86551.08 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 28], 20.00th=[ 52], 00:25:33.976 | 30.00th=[ 86], 40.00th=[ 107], 50.00th=[ 121], 60.00th=[ 138], 00:25:33.976 | 70.00th=[ 178], 80.00th=[ 226], 90.00th=[ 262], 95.00th=[ 292], 00:25:33.976 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:25:33.976 | 99.99th=[ 355] 00:25:33.976 bw ( KiB/s): min=53248, max=248320, per=8.30%, avg=119449.60, stdev=48232.84, samples=20 00:25:33.976 iops : min= 208, max= 970, avg=466.60, stdev=188.41, samples=20 00:25:33.976 lat (usec) : 1000=0.04% 00:25:33.976 lat (msec) : 2=0.49%, 4=1.08%, 10=2.47%, 20=2.92%, 50=12.71% 00:25:33.976 lat (msec) : 100=16.30%, 250=51.23%, 500=12.77% 00:25:33.976 cpu : usr=1.13%, sys=1.76%, ctx=3134, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,4730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job8: (groupid=0, jobs=1): err= 0: pid=1310166: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=531, BW=133MiB/s (139MB/s)(1340MiB/10087msec); 0 zone resets 00:25:33.976 slat (usec): min=18, max=74800, avg=805.02, stdev=3254.88 00:25:33.976 clat (usec): min=1372, max=443075, avg=119637.63, stdev=86177.74 00:25:33.976 lat (usec): min=1410, max=449795, avg=120442.65, stdev=87016.12 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 22], 20.00th=[ 38], 00:25:33.976 | 30.00th=[ 
60], 40.00th=[ 82], 50.00th=[ 107], 60.00th=[ 131], 00:25:33.976 | 70.00th=[ 161], 80.00th=[ 192], 90.00th=[ 230], 95.00th=[ 292], 00:25:33.976 | 99.00th=[ 368], 99.50th=[ 409], 99.90th=[ 430], 99.95th=[ 439], 00:25:33.976 | 99.99th=[ 443] 00:25:33.976 bw ( KiB/s): min=66048, max=291840, per=9.41%, avg=135552.00, stdev=59247.07, samples=20 00:25:33.976 iops : min= 258, max= 1140, avg=529.50, stdev=231.43, samples=20 00:25:33.976 lat (msec) : 2=0.13%, 4=0.28%, 10=2.63%, 20=6.01%, 50=15.98% 00:25:33.976 lat (msec) : 100=22.14%, 250=45.18%, 500=7.65% 00:25:33.976 cpu : usr=1.41%, sys=1.80%, ctx=3975, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,5358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job9: (groupid=0, jobs=1): err= 0: pid=1310167: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=622, BW=156MiB/s (163MB/s)(1572MiB/10101msec); 0 zone resets 00:25:33.976 slat (usec): min=17, max=56992, avg=519.33, stdev=2669.89 00:25:33.976 clat (usec): min=847, max=348945, avg=102250.14, stdev=81942.62 00:25:33.976 lat (usec): min=911, max=351689, avg=102769.47, stdev=82607.92 00:25:33.976 clat percentiles (msec): 00:25:33.976 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 27], 00:25:33.976 | 30.00th=[ 41], 40.00th=[ 57], 50.00th=[ 82], 60.00th=[ 110], 00:25:33.976 | 70.00th=[ 140], 80.00th=[ 174], 90.00th=[ 230], 95.00th=[ 271], 00:25:33.976 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 342], 99.95th=[ 342], 00:25:33.976 | 99.99th=[ 351] 00:25:33.976 bw ( KiB/s): min=60416, max=309248, per=11.07%, avg=159360.00, stdev=65414.81, samples=20 00:25:33.976 iops : min= 236, max= 1208, avg=622.50, stdev=255.53, samples=20 00:25:33.976 lat (usec) : 
1000=0.06% 00:25:33.976 lat (msec) : 2=0.43%, 4=1.48%, 10=5.18%, 20=7.84%, 50=20.80% 00:25:33.976 lat (msec) : 100=20.99%, 250=35.83%, 500=7.38% 00:25:33.976 cpu : usr=1.80%, sys=2.12%, ctx=5280, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,6288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 job10: (groupid=0, jobs=1): err= 0: pid=1310168: Tue Jul 16 00:09:07 2024 00:25:33.976 write: IOPS=520, BW=130MiB/s (136MB/s)(1310MiB/10072msec); 0 zone resets 00:25:33.976 slat (usec): min=17, max=134140, avg=722.12, stdev=3757.05 00:25:33.976 clat (usec): min=822, max=449920, avg=122246.56, stdev=89738.70 00:25:33.976 lat (usec): min=895, max=449949, avg=122968.68, stdev=90557.39 00:25:33.976 clat percentiles (usec): 00:25:33.976 | 1.00th=[ 1565], 5.00th=[ 5735], 10.00th=[ 15795], 20.00th=[ 35914], 00:25:33.976 | 30.00th=[ 52167], 40.00th=[ 78119], 50.00th=[110625], 60.00th=[141558], 00:25:33.976 | 70.00th=[177210], 80.00th=[202376], 90.00th=[242222], 95.00th=[287310], 00:25:33.976 | 99.00th=[346031], 99.50th=[379585], 99.90th=[413139], 99.95th=[442500], 00:25:33.976 | 99.99th=[450888] 00:25:33.976 bw ( KiB/s): min=53248, max=275968, per=9.20%, avg=132531.20, stdev=54495.58, samples=20 00:25:33.976 iops : min= 208, max= 1078, avg=517.70, stdev=212.87, samples=20 00:25:33.976 lat (usec) : 1000=0.36% 00:25:33.976 lat (msec) : 2=1.32%, 4=2.19%, 10=3.03%, 20=5.23%, 50=16.85% 00:25:33.976 lat (msec) : 100=18.36%, 250=43.85%, 500=8.80% 00:25:33.976 cpu : usr=1.29%, sys=1.86%, ctx=4223, majf=0, minf=1 00:25:33.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:33.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:33.976 issued rwts: total=0,5240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.976 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:33.976 00:25:33.976 Run status group 0 (all jobs): 00:25:33.976 WRITE: bw=1406MiB/s (1474MB/s), 109MiB/s-156MiB/s (115MB/s-163MB/s), io=13.9GiB (14.9GB), run=10072-10132msec 00:25:33.976 00:25:33.976 Disk stats (read/write): 00:25:33.976 nvme0n1: ios=49/9279, merge=0/0, ticks=126/1248824, in_queue=1248950, util=97.26% 00:25:33.976 nvme10n1: ios=50/11792, merge=0/0, ticks=745/1256051, in_queue=1256796, util=99.67% 00:25:33.976 nvme1n1: ios=46/8773, merge=0/0, ticks=1653/1226027, in_queue=1227680, util=99.98% 00:25:33.976 nvme2n1: ios=41/9902, merge=0/0, ticks=318/1251805, in_queue=1252123, util=98.22% 00:25:33.976 nvme3n1: ios=20/10715, merge=0/0, ticks=228/1251908, in_queue=1252136, util=97.45% 00:25:33.976 nvme4n1: ios=0/8877, merge=0/0, ticks=0/1250565, in_queue=1250565, util=97.90% 00:25:33.976 nvme5n1: ios=45/11082, merge=0/0, ticks=81/1255543, in_queue=1255624, util=98.58% 00:25:33.976 nvme6n1: ios=43/9369, merge=0/0, ticks=3738/1230989, in_queue=1234727, util=100.00% 00:25:33.976 nvme7n1: ios=0/10704, merge=0/0, ticks=0/1253000, in_queue=1253000, util=98.75% 00:25:33.976 nvme8n1: ios=0/12544, merge=0/0, ticks=0/1256354, in_queue=1256354, util=98.94% 00:25:33.976 nvme9n1: ios=39/10478, merge=0/0, ticks=1030/1253946, in_queue=1254976, util=100.00% 00:25:33.976 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:33.976 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:33.976 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.976 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:33.976 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:33.976 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:33.977 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:33.977 00:09:07 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:33.977 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:34.235 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.235 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:34.494 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.494 00:09:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:34.753 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.753 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:35.011 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1215 -- # local i=0 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:35.011 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.011 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:35.269 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:35.269 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.269 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:35.528 NQN:nqn.2016-06.io.spdk:cnode10 
disconnected 1 controller(s) 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:35.528 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- 
# grep -q -w SPDK11 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.528 00:09:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.528 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.528 rmmod nvme_tcp 00:25:35.528 rmmod nvme_fabrics 00:25:35.786 rmmod nvme_keyring 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:35.786 
00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1306099 ']' 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1306099 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1306099 ']' 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1306099 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:35.786 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1306099 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1306099' 00:25:35.787 killing process with pid 1306099 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1306099 00:25:35.787 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1306099 00:25:36.045 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:36.045 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:36.045 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:36.046 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.046 00:09:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.046 00:09:10 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.046 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.046 00:09:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.574 00:09:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:38.574 00:25:38.574 real 0m58.052s 00:25:38.574 user 3m13.050s 00:25:38.574 sys 0m24.938s 00:25:38.574 00:09:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:38.574 00:09:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.574 ************************************ 00:25:38.574 END TEST nvmf_multiconnection 00:25:38.574 ************************************ 00:25:38.574 00:09:12 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:38.574 00:09:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:38.574 00:09:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:38.574 00:09:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:38.574 ************************************ 00:25:38.574 START TEST nvmf_initiator_timeout 00:25:38.574 ************************************ 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:38.574 * Looking for test storage... 
00:25:38.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.574 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:38.575 00:09:12 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:38.575 00:09:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.951 
00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:39.951 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:39.951 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:25:39.951 Found net devices under 0000:08:00.0: cvl_0_0 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:39.951 Found net devices under 0000:08:00.1: cvl_0_1 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.951 00:09:14 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:25:39.951 00:25:39.951 --- 10.0.0.2 ping statistics --- 00:25:39.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.951 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:39.951 00:25:39.951 --- 10.0.0.1 ping statistics --- 00:25:39.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.951 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.951 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1313352 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1313352 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1313352 ']' 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:39.952 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.952 [2024-07-16 00:09:14.436846] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:39.952 [2024-07-16 00:09:14.436948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.210 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.210 [2024-07-16 00:09:14.515955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.210 [2024-07-16 00:09:14.603455] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.210 [2024-07-16 00:09:14.603513] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.210 [2024-07-16 00:09:14.603529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.210 [2024-07-16 00:09:14.603545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.210 [2024-07-16 00:09:14.603558] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:40.210 [2024-07-16 00:09:14.603623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.210 [2024-07-16 00:09:14.603678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.210 [2024-07-16 00:09:14.603726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.210 [2024-07-16 00:09:14.603728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 Malloc0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 Delay0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 [2024-07-16 00:09:14.781146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.469 00:09:14 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:40.469 [2024-07-16 00:09:14.809377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.469 00:09:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:41.035 00:09:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:41.035 00:09:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:41.035 00:09:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.035 00:09:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:41.035 00:09:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:42.934 00:09:17 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1313586 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:42.934 00:09:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:42.934 [global] 00:25:42.934 thread=1 00:25:42.934 invalidate=1 00:25:42.934 rw=write 00:25:42.934 time_based=1 00:25:42.934 runtime=60 00:25:42.934 ioengine=libaio 00:25:42.934 direct=1 00:25:42.934 bs=4096 00:25:42.934 iodepth=1 00:25:42.934 norandommap=0 00:25:42.934 numjobs=1 00:25:42.934 00:25:42.934 verify_dump=1 00:25:42.934 verify_backlog=512 00:25:42.934 verify_state_save=0 00:25:42.934 do_verify=1 00:25:42.934 verify=crc32c-intel 00:25:42.934 [job0] 00:25:42.934 filename=/dev/nvme0n1 00:25:42.934 Could not set queue depth (nvme0n1) 00:25:43.192 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:43.192 fio-3.35 00:25:43.192 Starting 1 thread 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.472 true 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.472 true 00:25:46.472 00:09:20 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.472 true 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.472 true 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.472 00:09:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.000 true 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.000 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.001 true 
00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.001 true 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.001 true 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:49.001 00:09:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1313586 00:26:45.240 00:26:45.240 job0: (groupid=0, jobs=1): err= 0: pid=1313699: Tue Jul 16 00:10:17 2024 00:26:45.240 read: IOPS=158, BW=635KiB/s (650kB/s)(37.2MiB/60029msec) 00:26:45.240 slat (usec): min=5, max=8731, avg=12.76, stdev=118.45 00:26:45.240 clat (usec): min=216, max=41163k, avg=6062.52, stdev=421864.46 00:26:45.240 lat (usec): min=222, max=41163k, avg=6075.28, stdev=421864.58 00:26:45.240 clat percentiles (usec): 00:26:45.240 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 00:26:45.240 | 20.00th=[ 243], 30.00th=[ 245], 40.00th=[ 249], 00:26:45.240 | 50.00th=[ 253], 60.00th=[ 258], 70.00th=[ 262], 00:26:45.240 | 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 465], 00:26:45.240 | 99.00th=[ 41157], 99.50th=[ 42206], 
99.90th=[ 42206], 00:26:45.240 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:45.240 write: IOPS=162, BW=648KiB/s (664kB/s)(38.0MiB/60029msec); 0 zone resets 00:26:45.240 slat (usec): min=6, max=29416, avg=16.82, stdev=298.15 00:26:45.240 clat (usec): min=162, max=3126, avg=200.05, stdev=35.24 00:26:45.240 lat (usec): min=169, max=29755, avg=216.87, stdev=301.80 00:26:45.240 clat percentiles (usec): 00:26:45.240 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:26:45.240 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:26:45.240 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:26:45.240 | 99.00th=[ 255], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 338], 00:26:45.240 | 99.99th=[ 3130] 00:26:45.240 bw ( KiB/s): min= 3752, max= 9128, per=100.00%, avg=7074.91, stdev=2074.98, samples=11 00:26:45.240 iops : min= 938, max= 2282, avg=1768.73, stdev=518.75, samples=11 00:26:45.240 lat (usec) : 250=71.33%, 500=26.79%, 750=0.09% 00:26:45.240 lat (msec) : 4=0.01%, 50=1.79%, >=2000=0.01% 00:26:45.240 cpu : usr=0.24%, sys=0.41%, ctx=19256, majf=0, minf=1 00:26:45.240 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.240 issued rwts: total=9523,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.240 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.240 00:26:45.240 Run status group 0 (all jobs): 00:26:45.240 READ: bw=635KiB/s (650kB/s), 635KiB/s-635KiB/s (650kB/s-650kB/s), io=37.2MiB (39.0MB), run=60029-60029msec 00:26:45.240 WRITE: bw=648KiB/s (664kB/s), 648KiB/s-648KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60029-60029msec 00:26:45.240 00:26:45.240 Disk stats (read/write): 00:26:45.240 nvme0n1: ios=9573/9728, merge=0/0, ticks=17382/1867, in_queue=19249, util=99.60% 00:26:45.240 00:10:17 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:45.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:45.240 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:45.240 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:26:45.240 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:45.241 nvmf hotplug test: fio successful as expected 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.241 rmmod nvme_tcp 00:26:45.241 rmmod nvme_fabrics 00:26:45.241 rmmod nvme_keyring 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1313352 ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1313352 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1313352 ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1313352 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1313352 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1313352' 00:26:45.241 killing process with pid 1313352 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1313352 00:26:45.241 00:10:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1313352 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.241 00:10:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.810 00:10:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:45.810 00:26:45.810 real 1m7.614s 00:26:45.810 user 4m8.467s 00:26:45.810 sys 0m6.496s 00:26:45.810 00:10:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:45.810 00:10:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.810 ************************************ 00:26:45.810 END TEST nvmf_initiator_timeout 00:26:45.810 ************************************ 00:26:45.810 00:10:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy 
]] 00:26:45.810 00:10:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:45.810 00:10:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:45.810 00:10:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:45.810 00:10:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.717 00:10:21 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:47.718 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:47.718 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.718 
00:10:21 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:47.718 Found net devices under 0000:08:00.0: cvl_0_0 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:47.718 Found net devices under 0000:08:00.1: cvl_0_1 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:47.718 00:10:21 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:47.718 00:10:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:47.718 00:10:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:47.718 00:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.718 ************************************ 00:26:47.718 START TEST nvmf_perf_adq 00:26:47.718 ************************************ 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:47.718 * Looking for test storage... 
00:26:47.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.718 00:10:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.718 00:10:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.719 00:10:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.719 00:10:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:49.097 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.097 00:10:23 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:49.097 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.097 
00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:49.097 Found net devices under 0000:08:00.0: cvl_0_0 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:49.097 Found net devices under 0000:08:00.1: cvl_0_1 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:49.097 00:10:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:49.666 00:10:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:51.566 00:10:25 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.844 00:10:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:56.844 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:56.844 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.844 00:10:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:56.844 Found net devices under 0000:08:00.0: cvl_0_0 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:56.844 Found net devices under 0000:08:00.1: cvl_0_1 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.844 00:10:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:26:56.844 00:26:56.844 --- 10.0.0.2 ping statistics --- 00:26:56.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.844 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:26:56.844 00:26:56.844 --- 10.0.0.1 ping statistics --- 00:26:56.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.844 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.844 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1322297 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1322297 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 
-- # '[' -z 1322297 ']' 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:56.845 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:56.845 [2024-07-16 00:10:31.145930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:56.845 [2024-07-16 00:10:31.146037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.845 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.845 [2024-07-16 00:10:31.216667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:56.845 [2024-07-16 00:10:31.307873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.845 [2024-07-16 00:10:31.307931] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.845 [2024-07-16 00:10:31.307948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.845 [2024-07-16 00:10:31.307962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.845 [2024-07-16 00:10:31.307975] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:56.845 [2024-07-16 00:10:31.308033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.845 [2024-07-16 00:10:31.308060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:56.845 [2024-07-16 00:10:31.308108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:56.845 [2024-07-16 00:10:31.308111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 [2024-07-16 00:10:31.558521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 Malloc1 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 
00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.102 [2024-07-16 00:10:31.607207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1322344 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:57.102 00:10:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:57.359 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:59.258 
"tick_rate": 2700000000, 00:26:59.258 "poll_groups": [ 00:26:59.258 { 00:26:59.258 "name": "nvmf_tgt_poll_group_000", 00:26:59.258 "admin_qpairs": 1, 00:26:59.258 "io_qpairs": 1, 00:26:59.258 "current_admin_qpairs": 1, 00:26:59.258 "current_io_qpairs": 1, 00:26:59.258 "pending_bdev_io": 0, 00:26:59.258 "completed_nvme_io": 18723, 00:26:59.258 "transports": [ 00:26:59.258 { 00:26:59.258 "trtype": "TCP" 00:26:59.258 } 00:26:59.258 ] 00:26:59.258 }, 00:26:59.258 { 00:26:59.258 "name": "nvmf_tgt_poll_group_001", 00:26:59.258 "admin_qpairs": 0, 00:26:59.258 "io_qpairs": 1, 00:26:59.258 "current_admin_qpairs": 0, 00:26:59.258 "current_io_qpairs": 1, 00:26:59.258 "pending_bdev_io": 0, 00:26:59.258 "completed_nvme_io": 19049, 00:26:59.258 "transports": [ 00:26:59.258 { 00:26:59.258 "trtype": "TCP" 00:26:59.258 } 00:26:59.258 ] 00:26:59.258 }, 00:26:59.258 { 00:26:59.258 "name": "nvmf_tgt_poll_group_002", 00:26:59.258 "admin_qpairs": 0, 00:26:59.258 "io_qpairs": 1, 00:26:59.258 "current_admin_qpairs": 0, 00:26:59.258 "current_io_qpairs": 1, 00:26:59.258 "pending_bdev_io": 0, 00:26:59.258 "completed_nvme_io": 19038, 00:26:59.258 "transports": [ 00:26:59.258 { 00:26:59.258 "trtype": "TCP" 00:26:59.258 } 00:26:59.258 ] 00:26:59.258 }, 00:26:59.258 { 00:26:59.258 "name": "nvmf_tgt_poll_group_003", 00:26:59.258 "admin_qpairs": 0, 00:26:59.258 "io_qpairs": 1, 00:26:59.258 "current_admin_qpairs": 0, 00:26:59.258 "current_io_qpairs": 1, 00:26:59.258 "pending_bdev_io": 0, 00:26:59.258 "completed_nvme_io": 18250, 00:26:59.258 "transports": [ 00:26:59.258 { 00:26:59.258 "trtype": "TCP" 00:26:59.258 } 00:26:59.258 ] 00:26:59.258 } 00:26:59.258 ] 00:26:59.258 }' 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:59.258 00:10:33 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:59.258 00:10:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1322344 00:27:07.406 Initializing NVMe Controllers 00:27:07.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.406 Initialization complete. Launching workers. 00:27:07.406 ======================================================== 00:27:07.406 Latency(us) 00:27:07.406 Device Information : IOPS MiB/s Average min max 00:27:07.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9859.36 38.51 6493.62 2783.04 10388.39 00:27:07.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9927.55 38.78 6448.05 2392.49 11160.92 00:27:07.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9525.06 37.21 6719.64 2882.06 10333.49 00:27:07.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9822.56 38.37 6515.28 2312.68 10308.48 00:27:07.406 ======================================================== 00:27:07.406 Total : 39134.53 152.87 6542.51 2312.68 11160.92 00:27:07.406 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:07.406 00:10:41 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.406 rmmod nvme_tcp 00:27:07.406 rmmod nvme_fabrics 00:27:07.406 rmmod nvme_keyring 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1322297 ']' 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1322297 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1322297 ']' 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1322297 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1322297 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:07.406 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:07.407 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1322297' 00:27:07.407 killing process with pid 1322297 00:27:07.407 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1322297 00:27:07.407 00:10:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1322297 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:07.667 00:10:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.667 00:10:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:09.579 00:10:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:09.579 00:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:09.579 00:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:10.146 00:10:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:12.054 00:10:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:17.332 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.333 00:10:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:17.333 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:17.333 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:17.333 Found net devices under 0000:08:00.0: cvl_0_0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:17.333 Found net devices under 0000:08:00.1: cvl_0_1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:17.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:27:17.333 00:27:17.333 --- 10.0.0.2 ping statistics --- 00:27:17.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.333 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:17.333 00:27:17.333 --- 10.0.0.1 ping statistics --- 00:27:17.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.333 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:17.333 net.core.busy_poll = 1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:17.333 net.core.busy_read = 1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1324328 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1324328 00:27:17.333 00:10:51 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1324328 ']' 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:17.333 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.333 [2024-07-16 00:10:51.742093] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:17.334 [2024-07-16 00:10:51.742202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.334 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.334 [2024-07-16 00:10:51.805788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.593 [2024-07-16 00:10:51.893288] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.593 [2024-07-16 00:10:51.893343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.593 [2024-07-16 00:10:51.893359] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.593 [2024-07-16 00:10:51.893373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.593 [2024-07-16 00:10:51.893385] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:17.593 [2024-07-16 00:10:51.893713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.593 [2024-07-16 00:10:51.893799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.593 [2024-07-16 00:10:51.893962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.593 [2024-07-16 00:10:51.893965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.593 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:17.593 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:17.593 00:10:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.593 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.593 00:10:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.593 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 [2024-07-16 00:10:52.159690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 Malloc1 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 
00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:17.861 [2024-07-16 00:10:52.209773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1324354 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:17.861 00:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:17.861 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.761 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:19.761 00:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.761 00:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:19.761 00:10:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.761 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:19.761 
"tick_rate": 2700000000, 00:27:19.761 "poll_groups": [ 00:27:19.761 { 00:27:19.761 "name": "nvmf_tgt_poll_group_000", 00:27:19.761 "admin_qpairs": 1, 00:27:19.761 "io_qpairs": 2, 00:27:19.761 "current_admin_qpairs": 1, 00:27:19.761 "current_io_qpairs": 2, 00:27:19.761 "pending_bdev_io": 0, 00:27:19.761 "completed_nvme_io": 23124, 00:27:19.761 "transports": [ 00:27:19.761 { 00:27:19.761 "trtype": "TCP" 00:27:19.761 } 00:27:19.761 ] 00:27:19.761 }, 00:27:19.761 { 00:27:19.761 "name": "nvmf_tgt_poll_group_001", 00:27:19.761 "admin_qpairs": 0, 00:27:19.761 "io_qpairs": 2, 00:27:19.761 "current_admin_qpairs": 0, 00:27:19.761 "current_io_qpairs": 2, 00:27:19.761 "pending_bdev_io": 0, 00:27:19.762 "completed_nvme_io": 23070, 00:27:19.762 "transports": [ 00:27:19.762 { 00:27:19.762 "trtype": "TCP" 00:27:19.762 } 00:27:19.762 ] 00:27:19.762 }, 00:27:19.762 { 00:27:19.762 "name": "nvmf_tgt_poll_group_002", 00:27:19.762 "admin_qpairs": 0, 00:27:19.762 "io_qpairs": 0, 00:27:19.762 "current_admin_qpairs": 0, 00:27:19.762 "current_io_qpairs": 0, 00:27:19.762 "pending_bdev_io": 0, 00:27:19.762 "completed_nvme_io": 0, 00:27:19.762 "transports": [ 00:27:19.762 { 00:27:19.762 "trtype": "TCP" 00:27:19.762 } 00:27:19.762 ] 00:27:19.762 }, 00:27:19.762 { 00:27:19.762 "name": "nvmf_tgt_poll_group_003", 00:27:19.762 "admin_qpairs": 0, 00:27:19.762 "io_qpairs": 0, 00:27:19.762 "current_admin_qpairs": 0, 00:27:19.762 "current_io_qpairs": 0, 00:27:19.762 "pending_bdev_io": 0, 00:27:19.762 "completed_nvme_io": 0, 00:27:19.762 "transports": [ 00:27:19.762 { 00:27:19.762 "trtype": "TCP" 00:27:19.762 } 00:27:19.762 ] 00:27:19.762 } 00:27:19.762 ] 00:27:19.762 }' 00:27:19.762 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:19.762 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:20.019 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:20.019 00:10:54 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:20.019 00:10:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1324354 00:27:28.125 Initializing NVMe Controllers 00:27:28.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:28.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:28.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:28.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:28.125 Initialization complete. Launching workers. 00:27:28.125 ======================================================== 00:27:28.125 Latency(us) 00:27:28.125 Device Information : IOPS MiB/s Average min max 00:27:28.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6464.40 25.25 9905.15 2060.16 55134.59 00:27:28.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5097.50 19.91 12557.37 2505.82 57118.89 00:27:28.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6978.00 27.26 9173.57 2186.76 55160.03 00:27:28.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5842.10 22.82 11004.65 2102.51 57663.85 00:27:28.125 ======================================================== 00:27:28.125 Total : 24382.00 95.24 10513.72 2060.16 57663.85 00:27:28.125 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:28.125 
00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.125 rmmod nvme_tcp 00:27:28.125 rmmod nvme_fabrics 00:27:28.125 rmmod nvme_keyring 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1324328 ']' 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1324328 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1324328 ']' 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1324328 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1324328 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1324328' 00:27:28.125 killing process with pid 1324328 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1324328 00:27:28.125 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1324328 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.384 00:11:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.384 00:11:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.292 00:11:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.292 00:11:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:30.292 00:27:30.292 real 0m42.845s 00:27:30.292 user 2m37.622s 00:27:30.292 sys 0m9.981s 00:27:30.292 00:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:30.292 00:11:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:30.292 ************************************ 00:27:30.292 END TEST nvmf_perf_adq 00:27:30.292 ************************************ 00:27:30.292 00:11:04 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:30.292 00:11:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:30.292 00:11:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.292 00:11:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.292 ************************************ 00:27:30.292 START TEST nvmf_shutdown 00:27:30.292 ************************************ 00:27:30.292 00:11:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:30.292 * Looking for test storage... 
00:27:30.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.580 00:11:04 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.580 00:11:04 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:30.580 ************************************ 00:27:30.580 START TEST nvmf_shutdown_tc1 00:27:30.580 ************************************ 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.580 00:11:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.580 00:11:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.959 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:31.960 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:31.960 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:31.960 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:32.219 Found net devices under 0000:08:00.0: cvl_0_0 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.219 00:11:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:32.219 Found net devices under 0000:08:00.1: cvl_0_1 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.219 
00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:32.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:27:32.219 00:27:32.219 --- 10.0.0.2 ping statistics --- 00:27:32.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.219 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:27:32.219 00:27:32.219 --- 10.0.0.1 ping statistics --- 00:27:32.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.219 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1326701 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1326701 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1326701 ']' 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.219 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.219 [2024-07-16 00:11:06.672278] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:32.219 [2024-07-16 00:11:06.672367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.219 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.477 [2024-07-16 00:11:06.739855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.477 [2024-07-16 00:11:06.827637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.477 [2024-07-16 00:11:06.827690] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.477 [2024-07-16 00:11:06.827706] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.477 [2024-07-16 00:11:06.827719] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.477 [2024-07-16 00:11:06.827731] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:32.477 [2024-07-16 00:11:06.827814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.477 [2024-07-16 00:11:06.827869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.477 [2024-07-16 00:11:06.828197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:32.477 [2024-07-16 00:11:06.828231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.477 [2024-07-16 00:11:06.962654] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:32.477 
00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.477 00:11:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.477 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.735 00:11:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.735 Malloc1 00:27:32.735 [2024-07-16 00:11:07.035471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.735 Malloc2 00:27:32.735 Malloc3 00:27:32.735 Malloc4 00:27:32.735 Malloc5 00:27:32.735 Malloc6 00:27:32.993 Malloc7 00:27:32.993 Malloc8 00:27:32.993 Malloc9 00:27:32.993 Malloc10 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1326841 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1326841 
/var/tmp/bdevperf.sock 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1326841 ']' 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.993 { 00:27:32.993 "params": { 00:27:32.993 "name": "Nvme$subsystem", 00:27:32.993 "trtype": "$TEST_TRANSPORT", 00:27:32.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.993 "adrfam": "ipv4", 00:27:32.993 "trsvcid": "$NVMF_PORT", 00:27:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.993 "hdgst": ${hdgst:-false}, 00:27:32.993 "ddgst": ${ddgst:-false} 00:27:32.993 }, 00:27:32.993 "method": "bdev_nvme_attach_controller" 00:27:32.993 } 00:27:32.993 EOF 00:27:32.993 )") 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.993 { 00:27:32.993 "params": { 00:27:32.993 "name": "Nvme$subsystem", 00:27:32.993 "trtype": "$TEST_TRANSPORT", 00:27:32.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.993 "adrfam": "ipv4", 00:27:32.993 "trsvcid": "$NVMF_PORT", 00:27:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.993 "hdgst": ${hdgst:-false}, 00:27:32.993 "ddgst": ${ddgst:-false} 00:27:32.993 }, 00:27:32.993 "method": "bdev_nvme_attach_controller" 00:27:32.993 } 00:27:32.993 EOF 00:27:32.993 
)") 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.993 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.993 { 00:27:32.993 "params": { 00:27:32.993 "name": "Nvme$subsystem", 00:27:32.993 "trtype": "$TEST_TRANSPORT", 00:27:32.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.993 "adrfam": "ipv4", 00:27:32.993 "trsvcid": "$NVMF_PORT", 00:27:32.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.994 "hdgst": ${hdgst:-false}, 00:27:32.994 "ddgst": ${ddgst:-false} 00:27:32.994 }, 00:27:32.994 "method": "bdev_nvme_attach_controller" 00:27:32.994 } 00:27:32.994 EOF 00:27:32.994 )") 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.994 { 00:27:32.994 "params": { 00:27:32.994 "name": "Nvme$subsystem", 00:27:32.994 "trtype": "$TEST_TRANSPORT", 00:27:32.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.994 "adrfam": "ipv4", 00:27:32.994 "trsvcid": "$NVMF_PORT", 00:27:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.994 "hdgst": ${hdgst:-false}, 00:27:32.994 "ddgst": ${ddgst:-false} 00:27:32.994 }, 00:27:32.994 "method": "bdev_nvme_attach_controller" 00:27:32.994 } 00:27:32.994 EOF 00:27:32.994 )") 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.994 00:11:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.994 { 00:27:32.994 "params": { 00:27:32.994 "name": "Nvme$subsystem", 00:27:32.994 "trtype": "$TEST_TRANSPORT", 00:27:32.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.994 "adrfam": "ipv4", 00:27:32.994 "trsvcid": "$NVMF_PORT", 00:27:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.994 "hdgst": ${hdgst:-false}, 00:27:32.994 "ddgst": ${ddgst:-false} 00:27:32.994 }, 00:27:32.994 "method": "bdev_nvme_attach_controller" 00:27:32.994 } 00:27:32.994 EOF 00:27:32.994 )") 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.994 { 00:27:32.994 "params": { 00:27:32.994 "name": "Nvme$subsystem", 00:27:32.994 "trtype": "$TEST_TRANSPORT", 00:27:32.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:32.994 "adrfam": "ipv4", 00:27:32.994 "trsvcid": "$NVMF_PORT", 00:27:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.994 "hdgst": ${hdgst:-false}, 00:27:32.994 "ddgst": ${ddgst:-false} 00:27:32.994 }, 00:27:32.994 "method": "bdev_nvme_attach_controller" 00:27:32.994 } 00:27:32.994 EOF 00:27:32.994 )") 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:32.994 { 00:27:32.994 "params": { 00:27:32.994 "name": "Nvme$subsystem", 00:27:32.994 "trtype": "$TEST_TRANSPORT", 00:27:32.994 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:32.994 "adrfam": "ipv4", 00:27:32.994 "trsvcid": "$NVMF_PORT", 00:27:32.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:32.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:32.994 "hdgst": ${hdgst:-false}, 00:27:32.994 "ddgst": ${ddgst:-false} 00:27:32.994 }, 00:27:32.994 "method": "bdev_nvme_attach_controller" 00:27:32.994 } 00:27:32.994 EOF 00:27:32.994 )") 00:27:32.994 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.252 { 00:27:33.252 "params": { 00:27:33.252 "name": "Nvme$subsystem", 00:27:33.252 "trtype": "$TEST_TRANSPORT", 00:27:33.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.252 "adrfam": "ipv4", 00:27:33.252 "trsvcid": "$NVMF_PORT", 00:27:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.252 "hdgst": ${hdgst:-false}, 00:27:33.252 "ddgst": ${ddgst:-false} 00:27:33.252 }, 00:27:33.252 "method": "bdev_nvme_attach_controller" 00:27:33.252 } 00:27:33.252 EOF 00:27:33.252 )") 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.252 { 00:27:33.252 "params": { 00:27:33.252 "name": "Nvme$subsystem", 00:27:33.252 "trtype": "$TEST_TRANSPORT", 00:27:33.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.252 "adrfam": "ipv4", 00:27:33.252 "trsvcid": "$NVMF_PORT", 00:27:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.252 
"hdgst": ${hdgst:-false}, 00:27:33.252 "ddgst": ${ddgst:-false} 00:27:33.252 }, 00:27:33.252 "method": "bdev_nvme_attach_controller" 00:27:33.252 } 00:27:33.252 EOF 00:27:33.252 )") 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.252 { 00:27:33.252 "params": { 00:27:33.252 "name": "Nvme$subsystem", 00:27:33.252 "trtype": "$TEST_TRANSPORT", 00:27:33.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.252 "adrfam": "ipv4", 00:27:33.252 "trsvcid": "$NVMF_PORT", 00:27:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.252 "hdgst": ${hdgst:-false}, 00:27:33.252 "ddgst": ${ddgst:-false} 00:27:33.252 }, 00:27:33.252 "method": "bdev_nvme_attach_controller" 00:27:33.252 } 00:27:33.252 EOF 00:27:33.252 )") 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:33.252 00:11:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:33.252 "params": { 00:27:33.252 "name": "Nvme1", 00:27:33.252 "trtype": "tcp", 00:27:33.252 "traddr": "10.0.0.2", 00:27:33.252 "adrfam": "ipv4", 00:27:33.252 "trsvcid": "4420", 00:27:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.252 "hdgst": false, 00:27:33.252 "ddgst": false 00:27:33.252 }, 00:27:33.252 "method": "bdev_nvme_attach_controller" 00:27:33.252 },{ 00:27:33.252 "params": { 00:27:33.252 "name": "Nvme2", 00:27:33.252 "trtype": "tcp", 00:27:33.252 "traddr": "10.0.0.2", 00:27:33.252 "adrfam": "ipv4", 00:27:33.252 "trsvcid": "4420", 00:27:33.252 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:33.252 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:33.252 "hdgst": false, 00:27:33.252 "ddgst": false 00:27:33.252 }, 00:27:33.252 "method": "bdev_nvme_attach_controller" 00:27:33.252 },{ 00:27:33.252 "params": { 00:27:33.253 "name": "Nvme3", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme4", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme5", 00:27:33.253 
"trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme6", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme7", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme8", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme9", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": 
false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 },{ 00:27:33.253 "params": { 00:27:33.253 "name": "Nvme10", 00:27:33.253 "trtype": "tcp", 00:27:33.253 "traddr": "10.0.0.2", 00:27:33.253 "adrfam": "ipv4", 00:27:33.253 "trsvcid": "4420", 00:27:33.253 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:33.253 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:33.253 "hdgst": false, 00:27:33.253 "ddgst": false 00:27:33.253 }, 00:27:33.253 "method": "bdev_nvme_attach_controller" 00:27:33.253 }' 00:27:33.253 [2024-07-16 00:11:07.527823] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:33.253 [2024-07-16 00:11:07.527913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:33.253 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.253 [2024-07-16 00:11:07.589296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.253 [2024-07-16 00:11:07.676698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1326841 
00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:35.150 00:11:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:36.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1326841 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1326701 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.079 { 00:27:36.079 "params": { 00:27:36.079 "name": "Nvme$subsystem", 00:27:36.079 "trtype": "$TEST_TRANSPORT", 00:27:36.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.079 "adrfam": "ipv4", 00:27:36.079 "trsvcid": "$NVMF_PORT", 00:27:36.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.079 "hdgst": ${hdgst:-false}, 00:27:36.079 "ddgst": ${ddgst:-false} 00:27:36.079 }, 00:27:36.079 "method": "bdev_nvme_attach_controller" 00:27:36.079 } 00:27:36.079 EOF 00:27:36.079 )") 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.079 { 00:27:36.079 "params": { 00:27:36.079 "name": "Nvme$subsystem", 00:27:36.079 "trtype": "$TEST_TRANSPORT", 00:27:36.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.079 "adrfam": "ipv4", 00:27:36.079 "trsvcid": "$NVMF_PORT", 00:27:36.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.079 "hdgst": ${hdgst:-false}, 00:27:36.079 "ddgst": ${ddgst:-false} 00:27:36.079 }, 00:27:36.079 "method": "bdev_nvme_attach_controller" 00:27:36.079 } 00:27:36.079 EOF 00:27:36.079 )") 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.079 { 00:27:36.079 "params": { 00:27:36.079 "name": "Nvme$subsystem", 00:27:36.079 "trtype": "$TEST_TRANSPORT", 00:27:36.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.079 "adrfam": "ipv4", 00:27:36.079 "trsvcid": "$NVMF_PORT", 00:27:36.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.079 "hdgst": ${hdgst:-false}, 00:27:36.079 "ddgst": ${ddgst:-false} 00:27:36.079 }, 00:27:36.079 "method": "bdev_nvme_attach_controller" 00:27:36.079 } 00:27:36.079 EOF 00:27:36.079 )") 00:27:36.079 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 
"trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 
00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.338 { 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme$subsystem", 00:27:36.338 "trtype": "$TEST_TRANSPORT", 00:27:36.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "$NVMF_PORT", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.338 "hdgst": ${hdgst:-false}, 00:27:36.338 "ddgst": ${ddgst:-false} 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 } 00:27:36.338 EOF 00:27:36.338 )") 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:36.338 00:11:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:36.338 00:11:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme1", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme2", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme3", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme4", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 
},{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme5", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme6", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme7", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:36.338 "hdgst": false, 00:27:36.338 "ddgst": false 00:27:36.338 }, 00:27:36.338 "method": "bdev_nvme_attach_controller" 00:27:36.338 },{ 00:27:36.338 "params": { 00:27:36.338 "name": "Nvme8", 00:27:36.338 "trtype": "tcp", 00:27:36.338 "traddr": "10.0.0.2", 00:27:36.338 "adrfam": "ipv4", 00:27:36.338 "trsvcid": "4420", 00:27:36.338 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:36.338 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:36.338 "hdgst": false, 00:27:36.339 "ddgst": false 00:27:36.339 }, 00:27:36.339 "method": "bdev_nvme_attach_controller" 00:27:36.339 },{ 00:27:36.339 "params": { 00:27:36.339 "name": "Nvme9", 00:27:36.339 "trtype": "tcp", 00:27:36.339 "traddr": "10.0.0.2", 00:27:36.339 "adrfam": "ipv4", 00:27:36.339 "trsvcid": "4420", 00:27:36.339 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:36.339 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:27:36.339 "hdgst": false, 00:27:36.339 "ddgst": false 00:27:36.339 }, 00:27:36.339 "method": "bdev_nvme_attach_controller" 00:27:36.339 },{ 00:27:36.339 "params": { 00:27:36.339 "name": "Nvme10", 00:27:36.339 "trtype": "tcp", 00:27:36.339 "traddr": "10.0.0.2", 00:27:36.339 "adrfam": "ipv4", 00:27:36.339 "trsvcid": "4420", 00:27:36.339 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:36.339 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:36.339 "hdgst": false, 00:27:36.339 "ddgst": false 00:27:36.339 }, 00:27:36.339 "method": "bdev_nvme_attach_controller" 00:27:36.339 }' 00:27:36.339 [2024-07-16 00:11:10.628584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:36.339 [2024-07-16 00:11:10.628681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327151 ] 00:27:36.339 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.339 [2024-07-16 00:11:10.692064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.339 [2024-07-16 00:11:10.783168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.236 Running I/O for 1 seconds... 
00:27:39.168 00:27:39.168 Latency(us) 00:27:39.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.168 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme1n1 : 1.10 174.20 10.89 0.00 0.00 362997.63 25826.04 302921.96 00:27:39.168 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme2n1 : 1.11 176.18 11.01 0.00 0.00 350357.91 3155.44 302921.96 00:27:39.168 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme3n1 : 1.09 176.23 11.01 0.00 0.00 342491.34 23592.96 306028.85 00:27:39.168 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme4n1 : 1.22 209.36 13.09 0.00 0.00 284943.93 19515.16 290494.39 00:27:39.168 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme5n1 : 1.21 210.97 13.19 0.00 0.00 275733.05 22816.24 299815.06 00:27:39.168 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme6n1 : 1.22 210.47 13.15 0.00 0.00 270109.20 38836.15 287387.50 00:27:39.168 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme7n1 : 1.20 212.74 13.30 0.00 0.00 260688.40 19612.25 306028.85 00:27:39.168 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme8n1 : 1.23 207.84 12.99 0.00 0.00 264151.61 21068.61 310689.19 00:27:39.168 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme9n1 : 1.24 206.78 12.92 0.00 0.00 259551.57 12087.75 333990.87 00:27:39.168 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:39.168 Verification LBA range: start 0x0 length 0x400 00:27:39.168 Nvme10n1 : 1.24 207.19 12.95 0.00 0.00 254084.93 23592.96 306028.85 00:27:39.168 =================================================================================================================== 00:27:39.169 Total : 1991.95 124.50 0.00 0.00 287797.34 3155.44 333990.87 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.426 rmmod nvme_tcp 00:27:39.426 rmmod nvme_fabrics 00:27:39.426 rmmod 
nvme_keyring 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1326701 ']' 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1326701 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1326701 ']' 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1326701 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1326701 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1326701' 00:27:39.426 killing process with pid 1326701 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1326701 00:27:39.426 00:11:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1326701 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:39.995 00:11:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.906 00:27:41.906 real 0m11.412s 00:27:41.906 user 0m34.496s 00:27:41.906 sys 0m2.806s 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.906 ************************************ 00:27:41.906 END TEST nvmf_shutdown_tc1 00:27:41.906 ************************************ 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:41.906 ************************************ 00:27:41.906 START TEST nvmf_shutdown_tc2 00:27:41.906 ************************************ 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # 
nvmf_shutdown_tc2 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.906 00:11:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.906 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:41.907 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:41.907 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:41.907 Found net devices under 0000:08:00.0: cvl_0_0 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:41.907 Found net devices under 0000:08:00.1: cvl_0_1 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.907 00:11:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:27:41.907 00:27:41.907 --- 10.0.0.2 ping statistics --- 00:27:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.907 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:27:41.907 00:27:41.907 --- 10.0.0.1 ping statistics --- 00:27:41.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.907 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:41.907 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1327729 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1327729 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1327729 ']' 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.167 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.167 [2024-07-16 00:11:16.503824] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:42.167 [2024-07-16 00:11:16.503918] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.167 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.167 [2024-07-16 00:11:16.570117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.167 [2024-07-16 00:11:16.661094] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.167 [2024-07-16 00:11:16.661156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:42.167 [2024-07-16 00:11:16.661173] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.167 [2024-07-16 00:11:16.661187] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.167 [2024-07-16 00:11:16.661199] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.167 [2024-07-16 00:11:16.661282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.167 [2024-07-16 00:11:16.661342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.167 [2024-07-16 00:11:16.661590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.167 [2024-07-16 00:11:16.661624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.425 [2024-07-16 00:11:16.800665] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.425 00:11:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.425 Malloc1 00:27:42.425 [2024-07-16 00:11:16.873416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.425 Malloc2 00:27:42.683 Malloc3 00:27:42.683 Malloc4 00:27:42.683 Malloc5 00:27:42.683 Malloc6 00:27:42.683 Malloc7 00:27:42.683 Malloc8 00:27:42.941 Malloc9 00:27:42.941 Malloc10 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1327872 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1327872 /var/tmp/bdevperf.sock 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1327872 ']' 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.941 { 00:27:42.941 "params": { 00:27:42.941 "name": "Nvme$subsystem", 00:27:42.941 "trtype": "$TEST_TRANSPORT", 00:27:42.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.941 "adrfam": "ipv4", 00:27:42.941 "trsvcid": "$NVMF_PORT", 00:27:42.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.941 "hdgst": ${hdgst:-false}, 00:27:42.941 "ddgst": ${ddgst:-false} 00:27:42.941 }, 00:27:42.941 "method": "bdev_nvme_attach_controller" 00:27:42.941 } 00:27:42.941 EOF 00:27:42.941 )") 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.941 { 00:27:42.941 "params": { 00:27:42.941 "name": "Nvme$subsystem", 00:27:42.941 "trtype": "$TEST_TRANSPORT", 00:27:42.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.941 "adrfam": "ipv4", 00:27:42.941 "trsvcid": "$NVMF_PORT", 00:27:42.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.941 "hdgst": ${hdgst:-false}, 00:27:42.941 "ddgst": ${ddgst:-false} 00:27:42.941 }, 00:27:42.941 "method": "bdev_nvme_attach_controller" 00:27:42.941 } 00:27:42.941 EOF 00:27:42.941 
)") 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.941 { 00:27:42.941 "params": { 00:27:42.941 "name": "Nvme$subsystem", 00:27:42.941 "trtype": "$TEST_TRANSPORT", 00:27:42.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.941 "adrfam": "ipv4", 00:27:42.941 "trsvcid": "$NVMF_PORT", 00:27:42.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.941 "hdgst": ${hdgst:-false}, 00:27:42.941 "ddgst": ${ddgst:-false} 00:27:42.941 }, 00:27:42.941 "method": "bdev_nvme_attach_controller" 00:27:42.941 } 00:27:42.941 EOF 00:27:42.941 )") 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.941 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.941 { 00:27:42.941 "params": { 00:27:42.941 "name": "Nvme$subsystem", 00:27:42.941 "trtype": "$TEST_TRANSPORT", 00:27:42.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.941 "adrfam": "ipv4", 00:27:42.941 "trsvcid": "$NVMF_PORT", 00:27:42.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.941 "hdgst": ${hdgst:-false}, 00:27:42.941 "ddgst": ${ddgst:-false} 00:27:42.941 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 "hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 "hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 "hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 "hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 
"hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.942 { 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme$subsystem", 00:27:42.942 "trtype": "$TEST_TRANSPORT", 00:27:42.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "$NVMF_PORT", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.942 "hdgst": ${hdgst:-false}, 00:27:42.942 "ddgst": ${ddgst:-false} 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 } 00:27:42.942 EOF 00:27:42.942 )") 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:42.942 00:11:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme1", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme2", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme3", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme4", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme5", 00:27:42.942 
"trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme6", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme7", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme8", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme9", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.942 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:42.942 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:42.942 "hdgst": false, 00:27:42.942 "ddgst": 
false 00:27:42.942 }, 00:27:42.942 "method": "bdev_nvme_attach_controller" 00:27:42.942 },{ 00:27:42.942 "params": { 00:27:42.942 "name": "Nvme10", 00:27:42.942 "trtype": "tcp", 00:27:42.942 "traddr": "10.0.0.2", 00:27:42.942 "adrfam": "ipv4", 00:27:42.942 "trsvcid": "4420", 00:27:42.943 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:42.943 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:42.943 "hdgst": false, 00:27:42.943 "ddgst": false 00:27:42.943 }, 00:27:42.943 "method": "bdev_nvme_attach_controller" 00:27:42.943 }' 00:27:42.943 [2024-07-16 00:11:17.354991] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:42.943 [2024-07-16 00:11:17.355084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327872 ] 00:27:42.943 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.943 [2024-07-16 00:11:17.417681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.202 [2024-07-16 00:11:17.505043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.572 Running I/O for 10 seconds... 
00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:45.137 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:45.393 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:45.651 00:11:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.651 00:11:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1327872 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1327872 ']' 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1327872 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1327872 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1327872' 00:27:45.651 killing process with pid 1327872 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1327872 00:27:45.651 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1327872 00:27:45.651 Received shutdown signal, test time was about 1.070291 seconds 00:27:45.651 00:27:45.651 Latency(us) 00:27:45.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.651 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme1n1 : 1.04 184.20 11.51 0.00 0.00 342933.81 38641.97 276513.37 00:27:45.651 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme2n1 : 1.06 210.02 13.13 0.00 0.00 287602.01 22039.51 301368.51 00:27:45.651 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme3n1 : 1.07 239.39 14.96 0.00 0.00 252340.72 18058.81 302921.96 00:27:45.651 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme4n1 : 1.07 240.33 15.02 0.00 0.00 244437.52 28544.57 288940.94 00:27:45.651 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme5n1 : 1.05 182.46 11.40 0.00 0.00 315513.36 27767.85 301368.51 00:27:45.651 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme6n1 : 1.05 182.00 11.37 0.00 0.00 308972.40 26602.76 327777.09 00:27:45.651 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme7n1 : 1.03 186.01 11.63 0.00 0.00 293085.23 20680.25 304475.40 00:27:45.651 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme8n1 : 1.03 186.26 11.64 0.00 0.00 285194.62 21068.61 276513.37 00:27:45.651 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme9n1 : 1.06 180.59 11.29 0.00 0.00 289197.32 26991.12 329330.54 00:27:45.651 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.651 Verification LBA range: start 0x0 length 0x400 00:27:45.651 Nvme10n1 : 1.05 183.47 11.47 0.00 0.00 276325.77 26214.40 301368.51 00:27:45.651 =================================================================================================================== 00:27:45.651 Total : 1974.73 123.42 0.00 0.00 286995.96 18058.81 329330.54 00:27:45.910 00:11:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1327729 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:46.846 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:46.846 rmmod nvme_tcp 00:27:46.846 rmmod nvme_fabrics 00:27:46.846 rmmod nvme_keyring 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1327729 ']' 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1327729 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1327729 ']' 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1327729 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1327729 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1327729' 00:27:47.106 killing process with pid 1327729 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1327729 00:27:47.106 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1327729 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.402 00:11:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:49.304 00:27:49.304 real 0m7.512s 00:27:49.304 user 0m22.836s 00:27:49.304 sys 0m1.419s 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:49.304 ************************************ 00:27:49.304 END TEST nvmf_shutdown_tc2 00:27:49.304 ************************************ 
00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:49.304 00:11:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:49.562 ************************************ 00:27:49.562 START TEST nvmf_shutdown_tc3 00:27:49.562 ************************************ 00:27:49.562 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:49.562 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:49.562 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:49.562 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:49.563 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:49.563 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:49.563 00:11:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:49.563 Found net devices under 0000:08:00.0: cvl_0_0 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.563 00:11:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:49.563 Found net devices under 0000:08:00.1: cvl_0_1 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.563 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:49.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:27:49.563 00:27:49.563 --- 10.0.0.2 ping statistics --- 00:27:49.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.563 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:27:49.564 00:27:49.564 --- 10.0.0.1 ping statistics --- 00:27:49.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.564 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1328566 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1328566 00:27:49.564 00:11:23 
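The `nvmf_tcp_init` steps above split the two-port NIC into target and initiator sides: the target port `cvl_0_0` moves into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1/24, and a ping in each direction verifies the link before `nvmf_tgt` starts. A dry-run replay of those commands (the `run` wrapper is a stand-in; executing for real requires root and the actual `cvl_0_*` interfaces):

```shell
#!/bin/sh
# Dry-run replay of the nvmf_tcp_init sequence in the trace above.
# Swap `echo "+ $*"` for `"$@"` to actually execute (needs root).
run() { echo "+ $*"; }

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                    # target side enters the netns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator IP, root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # root netns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target netns -> initiator
```

Prefixing `NVMF_APP` with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array) is what makes the target listen from inside the namespace.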
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1328566 ']' 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:49.564 00:11:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 [2024-07-16 00:11:24.039532] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:49.564 [2024-07-16 00:11:24.039640] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.564 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.821 [2024-07-16 00:11:24.109360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.821 [2024-07-16 00:11:24.196960] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.821 [2024-07-16 00:11:24.197018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:49.821 [2024-07-16 00:11:24.197033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.821 [2024-07-16 00:11:24.197047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.821 [2024-07-16 00:11:24.197058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:49.821 [2024-07-16 00:11:24.197113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.821 [2024-07-16 00:11:24.197170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.821 [2024-07-16 00:11:24.197478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:49.821 [2024-07-16 00:11:24.197513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.821 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:49.821 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:49.821 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:49.821 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.821 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.079 [2024-07-16 00:11:24.346816] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.079 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.079 Malloc1 00:27:50.079 [2024-07-16 00:11:24.433175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.079 Malloc2 00:27:50.079 Malloc3 00:27:50.079 Malloc4 00:27:50.079 Malloc5 00:27:50.337 Malloc6 00:27:50.337 Malloc7 00:27:50.337 Malloc8 00:27:50.337 Malloc9 00:27:50.337 Malloc10 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1328702 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1328702 /var/tmp/bdevperf.sock 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1328702 ']' 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:50.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.611 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.611 { 00:27:50.611 "params": { 00:27:50.611 "name": "Nvme$subsystem", 00:27:50.611 "trtype": "$TEST_TRANSPORT", 00:27:50.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 
}, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 
"params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.612 EOF 00:27:50.612 )") 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.612 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.612 { 00:27:50.612 "params": { 00:27:50.612 "name": "Nvme$subsystem", 00:27:50.612 "trtype": "$TEST_TRANSPORT", 00:27:50.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.612 "adrfam": "ipv4", 00:27:50.612 "trsvcid": "$NVMF_PORT", 00:27:50.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.612 "hdgst": ${hdgst:-false}, 00:27:50.612 "ddgst": ${ddgst:-false} 00:27:50.612 }, 00:27:50.612 "method": "bdev_nvme_attach_controller" 00:27:50.612 } 00:27:50.613 EOF 00:27:50.613 )") 00:27:50.613 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:50.613 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
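The `gen_nvmf_target_json` expansion above emits one `bdev_nvme_attach_controller` stanza per subsystem from a templated heredoc, then comma-joins them (`IFS=,`) and pipes the result through `jq`. The same shape can be sketched without the heredoc; field values are copied from the expanded config visible in the trace, and `gen_entry` is a hypothetical helper name, not the real script's function:

```shell
#!/bin/sh
# Emit bdev_nvme_attach_controller stanzas the way gen_nvmf_target_json
# does in the trace: one templated entry per subsystem, comma-joined.
gen_entry() {
  printf '{"params":{"name":"Nvme%d","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%d","hostnqn":"nqn.2016-06.io.spdk:host%d","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$1" "$1" "$1"
}

config=""
for subsystem in 1 2 3; do
  config="$config${config:+,}$(gen_entry "$subsystem")"   # IFS=, style join
done
# Bare array for illustration only; the real harness embeds the joined
# entries in bdevperf's full JSON config, delivered on /dev/fd/63.
printf '[%s]\n' "$config"
```

bdevperf then attaches one controller per `cnode1`..`cnode10` subsystem over the 10.0.0.2:4420 listener, as the merged config printed next shows.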
00:27:50.613 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:50.613 00:11:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme1", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme2", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme3", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme4", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme5", 00:27:50.613 
"trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme6", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme7", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme8", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme9", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": 
false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 },{ 00:27:50.613 "params": { 00:27:50.613 "name": "Nvme10", 00:27:50.613 "trtype": "tcp", 00:27:50.613 "traddr": "10.0.0.2", 00:27:50.613 "adrfam": "ipv4", 00:27:50.613 "trsvcid": "4420", 00:27:50.613 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:50.613 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:50.613 "hdgst": false, 00:27:50.613 "ddgst": false 00:27:50.613 }, 00:27:50.613 "method": "bdev_nvme_attach_controller" 00:27:50.613 }' 00:27:50.613 [2024-07-16 00:11:24.927573] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:50.613 [2024-07-16 00:11:24.927665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328702 ] 00:27:50.613 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.613 [2024-07-16 00:11:24.989110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.613 [2024-07-16 00:11:25.076331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.509 Running I/O for 10 seconds... 
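Once bdevperf begins its 10-second run, the harness polls `Nvme1n1`'s `num_read_ops` (via `bdev_get_iostat` piped through `jq`) until it reaches 100, sleeping 0.25 s between up to 10 attempts; the `waitforio` trace that follows shows counts of 3, then 67, climbing toward the threshold. The control flow, with the RPC replaced by a stub that merely pretends I/O is accumulating:

```shell
#!/bin/sh
# waitforio-style retry loop (target/shutdown.sh@57-67 in the trace).
# The ops increment is a stub for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
ops=0
waitforio() {
  i=10                               # retry budget, as in the trace
  while [ "$i" -ne 0 ]; do
    ops=$((ops + 40))                # stub: pretend reads are accumulating
    if [ "$ops" -ge 100 ]; then      # '[' $io -ge 100 ']' in the trace
      echo "read_io_count=$ops"
      return 0
    fi
    # real script: sleep 0.25 between polls
    i=$((i - 1))
  done
  echo "I/O never started" >&2
  return 1
}
waitforio
```

If the budget runs out before the count crosses 100, the test treats the shutdown target as having failed to serve I/O.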
00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:52.509 
00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.509 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.767 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.767 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:52.767 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:52.767 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:53.025 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1328566 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1328566 ']' 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1328566 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1328566 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:53.302 
00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1328566' killing process with pid 1328566 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1328566 00:27:53.302 00:11:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1328566 00:27:53.302 [2024-07-16 00:11:27.659877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa350 is same with the state(5) to be set 00:27:53.302 [2024-07-16 00:11:27.662346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20acd50 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.662383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20acd50 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.663424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa7f0 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.663504]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa7f0 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.666266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aac90 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.666307]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aac90 is same with the state(5) to be set 00:27:53.303 [2024-07-16 00:11:27.669487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669651]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669830] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.304 [2024-07-16 00:11:27.669998] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670126] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670162] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670176] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670352] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.670407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab150 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671460] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671626] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671640] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671667] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671723] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671797] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671966] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.671994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.672007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.305 [2024-07-16 00:11:27.672021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672131] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.672249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5f0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673360] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673408] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673574] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673745] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set 00:27:53.306 [2024-07-16 00:11:27.673911] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set
00:27:53.306 [2024-07-16 00:11:27.673924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abab0 is same with the state(5) to be set
00:27:53.306 [... previous message repeated through 2024-07-16 00:11:27.674222 ...]
00:27:53.307 [2024-07-16 00:11:27.674844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.307 [2024-07-16 00:11:27.674888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.307 [... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, and cid:3 ...]
00:27:53.307 [2024-07-16 00:11:27.674999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ad060 is same with the state(5) to be set
00:27:53.307 [... the same sequence of four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs followed by the nvme_tcp.c:323 recv-state error repeated for tqpair=0x1a1e150, 0x19e4fe0, 0x15a5980, 0x19ddfa0, 0x19e5950, 0x1a13c20, and 0x14d7610 ...]
00:27:53.307 [2024-07-16 00:11:27.675525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20abf50 is same with the state(5) to be set
00:27:53.308 [... previous message repeated, interleaved with the nvme_qpair.c / nvme_tcp.c sequences above, through 2024-07-16 00:11:27.677155; concurrent writers garbled several of these console lines in the raw output ...]
00:27:53.308 [2024-07-16 00:11:27.679279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.308 [2024-07-16 00:11:27.679313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [... WRITE / ABORTED - SQ DELETION pair repeated for cid:1 (lba:16512) through cid:42 (lba:21760), lba advancing by 128 per command ...]
00:27:53.309 [2024-07-16 00:11:27.680794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.680959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.680979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.680994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.681004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.681019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.681033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.681047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.681084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.681100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.681115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.309 [2024-07-16 00:11:27.681129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.309 [2024-07-16 00:11:27.681145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.309 [2024-07-16 00:11:27.681152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.310 [2024-07-16 00:11:27.681449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set
00:27:53.310 [2024-07-16 00:11:27.681451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.310 [2024-07-16 00:11:27.681464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.310 [2024-07-16 00:11:27.681478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.310 [2024-07-16 00:11:27.681493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.310 [2024-07-16 00:11:27.681510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.310 [2024-07-16 00:11:27.681525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681579] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681740] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.310 [2024-07-16 00:11:27.681886] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac3f0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.681929] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b68690 was disconnected and freed. reset controller. 
00:27:53.312 [2024-07-16 00:11:27.682746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 
00:11:27.682929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.682995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683096] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683109] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683271] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac8b0 is same with the state(5) to be set 00:27:53.312 [2024-07-16 00:11:27.683809] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:53.312 [2024-07-16 00:11:27.683854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13c20 (9): Bad file descriptor 00:27:53.312 [2024-07-16 00:11:27.684844] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:53.312 [2024-07-16 00:11:27.684915] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:53.312 [2024-07-16 00:11:27.684978] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:53.312 [2024-07-16 00:11:27.685042] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:53.312 [2024-07-16 00:11:27.685104] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:27:53.312 [2024-07-16 00:11:27.685188] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:53.312 [2024-07-16 00:11:27.686060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.312 [2024-07-16 00:11:27.686093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a13c20 with addr=10.0.0.2, port=4420
00:27:53.312 [2024-07-16 00:11:27.686118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13c20 is same with the state(5) to be set
00:27:53.312 [2024-07-16 00:11:27.686152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ad060 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1e150 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e4fe0 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5980 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ddfa0 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5950 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6f50 is same with the state(5) to be set
00:27:53.312 [2024-07-16 00:11:27.686565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7610 (9): Bad file descriptor
00:27:53.312 [2024-07-16 00:11:27.686621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:53.312 [2024-07-16 00:11:27.686748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a15760 is same with the state(5) to be set
00:27:53.312 [2024-07-16 00:11:27.686847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.312 [2024-07-16 00:11:27.686870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.312 [2024-07-16 00:11:27.686910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.312 [2024-07-16 00:11:27.686944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.312 [2024-07-16 00:11:27.686962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.686978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.686995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.687980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.687997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.313 [2024-07-16 00:11:27.688416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.313 [2024-07-16 00:11:27.688431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.688965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.688982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a9d30 is same with the state(5) to be set
00:27:53.314 [2024-07-16 00:11:27.689159] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15a9d30 was disconnected and freed. reset controller.
00:27:53.314 [2024-07-16 00:11:27.689410] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:53.314 [2024-07-16 00:11:27.689556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.689969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.689987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.690002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.690020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.690036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.690054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.690069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.314 [2024-07-16 00:11:27.705337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.314 [2024-07-16 00:11:27.705354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.315 [2024-07-16 00:11:27.705926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.315 [2024-07-16 00:11:27.705943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.705959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.705977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.705992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 
[2024-07-16 00:11:27.706340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.315 [2024-07-16 00:11:27.706607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.315 [2024-07-16 00:11:27.706746] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b671c0 was disconnected and freed. reset controller. 00:27:53.315 [2024-07-16 00:11:27.706856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13c20 (9): Bad file descriptor 00:27:53.315 [2024-07-16 00:11:27.706936] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.316 [2024-07-16 00:11:27.706991] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:53.316 [2024-07-16 00:11:27.707022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6f50 (9): Bad file descriptor
00:27:53.316 [2024-07-16 00:11:27.707058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a15760 (9): Bad file descriptor
00:27:53.316 [2024-07-16 00:11:27.709895] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:53.316 [2024-07-16 00:11:27.709956] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:53.316 [2024-07-16 00:11:27.709977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:53.316 [2024-07-16 00:11:27.710063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.316 [2024-07-16 00:11:27.710087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further identical READ / ABORTED - SQ DELETION (00/08) pairs elided: sqid:1 cid:1-62 nsid:1 lba:16512-24320 len:128, timestamps 00:11:27.710120-00:11:27.712232 ...]
00:27:53.317 [2024-07-16 00:11:27.712250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.317 [2024-07-16 00:11:27.712266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.317 [2024-07-16 00:11:27.712283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a8a50 is same with the state(5) to be set
00:27:53.317 [2024-07-16 00:11:27.713767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.317 [2024-07-16 00:11:27.713799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.317 [2024-07-16 00:11:27.713827]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.713845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.713863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.713879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.713897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.713913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.713931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.713946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.713964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.713980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.713998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.714014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.714031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.714055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.317 [2024-07-16 00:11:27.714074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.317 [2024-07-16 00:11:27.714090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714416] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714601] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.714957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.714973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 
00:11:27.714990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.318 [2024-07-16 00:11:27.715384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.318 [2024-07-16 00:11:27.715401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.319 [2024-07-16 00:11:27.715566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715746] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.715951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.715968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4d70 is same with the state(5) to be set 00:27:53.319 [2024-07-16 00:11:27.717467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.717981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.717997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.319 [2024-07-16 00:11:27.718363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.319 [2024-07-16 00:11:27.718378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 
00:11:27.718429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.718964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.718983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.320 [2024-07-16 00:11:27.719001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.320 [2024-07-16 00:11:27.719574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:53.320 [2024-07-16 00:11:27.719600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.719616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.719634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.719668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.719684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.719701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae62e0 is same with the state(5) to be set 00:27:53.321 [2024-07-16 00:11:27.721154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721249] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721839] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.721975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.721990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 
00:11:27.722461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.321 [2024-07-16 00:11:27.722527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.321 [2024-07-16 00:11:27.722542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.722965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.722980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.732909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.732979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.322 [2024-07-16 00:11:27.732999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.733380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.733397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x19b9b80 is same with the state(5) to be set 00:27:53.322 [2024-07-16 00:11:27.734889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.734918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.734948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.734965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.734990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.322 [2024-07-16 00:11:27.735479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.322 [2024-07-16 00:11:27.735494] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 
00:11:27.735882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.735982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.735997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.323 [2024-07-16 00:11:27.736461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.323 [2024-07-16 00:11:27.736911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.323 [2024-07-16 00:11:27.736929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.736962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.736978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.736995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.737011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.737028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.737044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.737061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.737076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.737093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae7b20 is same with the state(5) to be set 00:27:53.324 [2024-07-16 00:11:27.738625] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:53.324 [2024-07-16 00:11:27.738733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:53.324 [2024-07-16 00:11:27.738766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.324 [2024-07-16 00:11:27.738786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.324 [2024-07-16 00:11:27.738879] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.324 [2024-07-16 00:11:27.738905] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.324 [2024-07-16 00:11:27.738930] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:53.324 [2024-07-16 00:11:27.738956] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.324 [2024-07-16 00:11:27.738976] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.324 [2024-07-16 00:11:27.738999] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.324 [2024-07-16 00:11:27.739448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 
[2024-07-16 00:11:27.739817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.739983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.739999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.324 [2024-07-16 00:11:27.740344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.324 [2024-07-16 00:11:27.740362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.325 [2024-07-16 00:11:27.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.740968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.740983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 
00:11:27.741163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741348] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.325 [2024-07-16 00:11:27.741518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.325 [2024-07-16 00:11:27.741533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.325 [2024-07-16 00:11:27.741551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.325 [2024-07-16 00:11:27.741567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.325 [2024-07-16 00:11:27.741585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.325 [2024-07-16 00:11:27.741600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.325 [2024-07-16 00:11:27.741618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:53.325 [2024-07-16 00:11:27.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:53.325 [2024-07-16 00:11:27.741651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8d70 is same with the state(5) to be set
00:27:53.325 [2024-07-16 00:11:27.743474] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:53.325 [2024-07-16 00:11:27.743533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:53.325 [2024-07-16 00:11:27.743555] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:53.325 [2024-07-16 00:11:27.743574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:53.325 [2024-07-16 00:11:27.743593] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:53.325 [2024-07-16 00:11:27.743823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.325 [2024-07-16 00:11:27.743864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ddfa0 with addr=10.0.0.2, port=4420
00:27:53.325 [2024-07-16 00:11:27.743883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ddfa0 is same with the state(5) to be set
00:27:53.325 [2024-07-16 00:11:27.744001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.325 [2024-07-16 00:11:27.744028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5980 with addr=10.0.0.2, port=4420
00:27:53.325 [2024-07-16 00:11:27.744045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5980 is same with the state(5) to be set
00:27:53.325 [2024-07-16 00:11:27.745644] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:53.326 [2024-07-16 00:11:27.745873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.326 [2024-07-16 00:11:27.745902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e5950 with addr=10.0.0.2, port=4420
00:27:53.326 [2024-07-16 00:11:27.745920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e5950 is same with the state(5) to be set
00:27:53.326 [2024-07-16 00:11:27.746048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.326 [2024-07-16 00:11:27.746073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e4fe0 with addr=10.0.0.2, port=4420
00:27:53.326 [2024-07-16 00:11:27.746107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e4fe0 is same with the state(5) to be set
00:27:53.326 [2024-07-16 00:11:27.746237] posix.c:1037:posix_sock_create:
*ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-07-16 00:11:27.746281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ad060 with addr=10.0.0.2, port=4420 00:27:53.326 [2024-07-16 00:11:27.746300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ad060 is same with the state(5) to be set 00:27:53.326 [2024-07-16 00:11:27.746461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-07-16 00:11:27.746499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1e150 with addr=10.0.0.2, port=4420 00:27:53.326 [2024-07-16 00:11:27.746519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1e150 is same with the state(5) to be set 00:27:53.326 [2024-07-16 00:11:27.746650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-07-16 00:11:27.746676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f6f50 with addr=10.0.0.2, port=4420 00:27:53.326 [2024-07-16 00:11:27.746692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f6f50 is same with the state(5) to be set 00:27:53.326 [2024-07-16 00:11:27.746719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ddfa0 (9): Bad file descriptor 00:27:53.326 [2024-07-16 00:11:27.746741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5980 (9): Bad file descriptor 00:27:53.326 [2024-07-16 00:11:27.747183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747240] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.747972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.747990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.326 [2024-07-16 00:11:27.748366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.326 [2024-07-16 00:11:27.748382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 
00:11:27.748400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.748933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:53.327 [2024-07-16 00:11:27.748966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.748982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749157] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.327 [2024-07-16 00:11:27.749366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.327 [2024-07-16 00:11:27.749383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b65ea0 is same with the state(5) to be set 00:27:53.327 [2024-07-16 00:11:27.751777] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:53.327 task offset: 16384 on job bdev=Nvme10n1 fails 00:27:53.327 00:27:53.327 Latency(us) 00:27:53.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.327 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme1n1 ended in about 0.98 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme1n1 : 0.98 130.08 8.13 65.04 0.00 323880.83 25243.50 296708.17 00:27:53.327 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme2n1 ended in about 0.98 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme2n1 : 0.98 130.77 8.17 65.38 0.00 314461.04 25437.68 320009.86 00:27:53.327 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme3n1 ended in about 0.99 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme3n1 : 0.99 133.64 8.35 64.80 0.00 303429.20 35923.44 306028.85 00:27:53.327 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme4n1 ended in about 0.99 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme4n1 : 0.99 129.11 8.07 64.55 0.00 303430.16 19320.98 309135.74 00:27:53.327 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:27:53.327 Job: Nvme5n1 ended in about 1.01 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme5n1 : 1.01 127.35 7.96 63.68 0.00 300397.61 26796.94 302921.96 00:27:53.327 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme6n1 ended in about 1.01 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.327 Nvme6n1 : 1.01 126.89 7.93 63.44 0.00 294006.83 22136.60 279620.27 00:27:53.327 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.327 Job: Nvme7n1 ended in about 1.01 seconds with error 00:27:53.327 Verification LBA range: start 0x0 length 0x400 00:27:53.328 Nvme7n1 : 1.01 126.32 7.89 63.16 0.00 288054.49 45438.29 262532.36 00:27:53.328 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.328 Job: Nvme8n1 ended in about 1.02 seconds with error 00:27:53.328 Verification LBA range: start 0x0 length 0x400 00:27:53.328 Nvme8n1 : 1.02 125.36 7.84 62.68 0.00 283414.88 23884.23 279620.27 00:27:53.328 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.328 Job: Nvme9n1 ended in about 0.98 seconds with error 00:27:53.328 Verification LBA range: start 0x0 length 0x400 00:27:53.328 Nvme9n1 : 0.98 135.68 8.48 60.19 0.00 261637.82 20291.89 312242.63 00:27:53.328 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:53.328 Job: Nvme10n1 ended in about 0.95 seconds with error 00:27:53.328 Verification LBA range: start 0x0 length 0x400 00:27:53.328 Nvme10n1 : 0.95 134.13 8.38 67.07 0.00 246220.36 4199.16 346418.44 00:27:53.328 =================================================================================================================== 00:27:53.328 Total : 1299.33 81.21 639.99 0.00 291917.31 4199.16 346418.44 00:27:53.328 [2024-07-16 00:11:27.778382] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:53.328 
[2024-07-16 00:11:27.778466] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:53.328 [2024-07-16 00:11:27.778773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.328 [2024-07-16 00:11:27.778808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d7610 with addr=10.0.0.2, port=4420 00:27:53.328 [2024-07-16 00:11:27.778830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d7610 is same with the state(5) to be set 00:27:53.328 [2024-07-16 00:11:27.778857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e5950 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.778880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e4fe0 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.778900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ad060 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.778920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1e150 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.778940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f6f50 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.778959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.778974] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.778991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:53.328 [2024-07-16 00:11:27.779019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.779035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.779049] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:53.328 [2024-07-16 00:11:27.779118] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.779153] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.779629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.779658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:53.328 [2024-07-16 00:11:27.779820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.328 [2024-07-16 00:11:27.779849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a13c20 with addr=10.0.0.2, port=4420 00:27:53.328 [2024-07-16 00:11:27.779867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13c20 is same with the state(5) to be set 00:27:53.328 [2024-07-16 00:11:27.779961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.328 [2024-07-16 00:11:27.779986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a15760 with addr=10.0.0.2, port=4420 00:27:53.328 [2024-07-16 00:11:27.780003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a15760 is same with the state(5) to be set 00:27:53.328 [2024-07-16 00:11:27.780023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7610 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.780041] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780055] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780069] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:53.328 [2024-07-16 00:11:27.780089] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780104] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780118] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:27:53.328 [2024-07-16 00:11:27.780162] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780196] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:53.328 [2024-07-16 00:11:27.780215] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780230] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780244] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:53.328 [2024-07-16 00:11:27.780263] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780278] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:53.328 [2024-07-16 00:11:27.780314] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.780337] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.780357] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.780377] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:27:53.328 [2024-07-16 00:11:27.780397] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.780417] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:53.328 [2024-07-16 00:11:27.780817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.780844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.780857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.780870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.780883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.780912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13c20 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.780935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a15760 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.780952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.780966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.780981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:27:53.328 [2024-07-16 00:11:27.781047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:53.328 [2024-07-16 00:11:27.781074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:53.328 [2024-07-16 00:11:27.781092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.781125] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.781149] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.781165] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:53.328 [2024-07-16 00:11:27.781184] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.781199] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.781212] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:53.328 [2024-07-16 00:11:27.781270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.781290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:53.328 [2024-07-16 00:11:27.781445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.328 [2024-07-16 00:11:27.781474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a5980 with addr=10.0.0.2, port=4420 00:27:53.328 [2024-07-16 00:11:27.781493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a5980 is same with the state(5) to be set 00:27:53.328 [2024-07-16 00:11:27.781616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.328 [2024-07-16 00:11:27.781642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ddfa0 with addr=10.0.0.2, port=4420 00:27:53.328 [2024-07-16 00:11:27.781659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ddfa0 is same with the state(5) to be set 00:27:53.328 [2024-07-16 00:11:27.781709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a5980 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.781735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ddfa0 (9): Bad file descriptor 00:27:53.328 [2024-07-16 00:11:27.781780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.781800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.781823] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:53.328 [2024-07-16 00:11:27.781841] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:53.328 [2024-07-16 00:11:27.781856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:53.328 [2024-07-16 00:11:27.781870] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:53.328 [2024-07-16 00:11:27.781914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.328 [2024-07-16 00:11:27.781933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.588 00:11:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:53.588 00:11:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1328702 00:27:54.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1328702) - No such process 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:54.966 00:11:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.966 rmmod nvme_tcp 00:27:54.966 rmmod nvme_fabrics 00:27:54.966 rmmod nvme_keyring 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.966 00:11:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.966 00:11:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.871 00:27:56.871 real 0m7.357s 00:27:56.871 user 0m18.154s 00:27:56.871 sys 0m1.345s 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.871 ************************************ 00:27:56.871 END TEST nvmf_shutdown_tc3 00:27:56.871 ************************************ 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:56.871 00:27:56.871 real 0m26.450s 00:27:56.871 user 1m15.550s 00:27:56.871 sys 0m5.693s 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:56.871 ************************************ 00:27:56.871 END TEST nvmf_shutdown 00:27:56.871 ************************************ 00:27:56.871 00:11:31 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.871 00:11:31 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.871 00:11:31 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:56.871 00:11:31 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.871 ************************************ 00:27:56.871 START TEST nvmf_multicontroller 00:27:56.871 ************************************ 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:56.871 * Looking for test storage... 00:27:56.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:56.871 00:11:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@297 -- # local -ga x722 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.775 00:11:32 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:58.775 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:58.775 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.775 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:27:58.776 Found net devices under 0000:08:00.0: cvl_0_0 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:58.776 
Found net devices under 0000:08:00.1: cvl_0_1 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.776 00:11:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:27:58.776 00:27:58.776 --- 10.0.0.2 ping statistics --- 00:27:58.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.776 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:58.776 00:27:58.776 --- 10.0.0.1 ping statistics --- 00:27:58.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.776 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1330539 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 
1330539 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1330539 ']' 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.776 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:58.776 [2024-07-16 00:11:33.122690] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:58.776 [2024-07-16 00:11:33.122775] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.776 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.776 [2024-07-16 00:11:33.186185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.776 [2024-07-16 00:11:33.273736] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.776 [2024-07-16 00:11:33.273791] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.776 [2024-07-16 00:11:33.273807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.776 [2024-07-16 00:11:33.273828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:58.776 [2024-07-16 00:11:33.273840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.776 [2024-07-16 00:11:33.273922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.776 [2024-07-16 00:11:33.273972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.776 [2024-07-16 00:11:33.273976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.033 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.033 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:59.033 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.033 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.033 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 [2024-07-16 00:11:33.400913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:27:59.034 Malloc0 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 [2024-07-16 00:11:33.465513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 [2024-07-16 00:11:33.473417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 Malloc1 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 
00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1330646 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1330646 /var/tmp/bdevperf.sock 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1330646 ']' 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:59.034 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 NVMe0n1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.600 1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 request: 00:27:59.600 { 00:27:59.600 "name": "NVMe0", 00:27:59.600 "trtype": "tcp", 00:27:59.600 "traddr": "10.0.0.2", 00:27:59.600 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:59.600 "hostaddr": "10.0.0.2", 00:27:59.600 "hostsvcid": "60000", 00:27:59.600 "adrfam": "ipv4", 00:27:59.600 "trsvcid": "4420", 00:27:59.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.600 "method": "bdev_nvme_attach_controller", 00:27:59.600 "req_id": 1 00:27:59.600 } 00:27:59.600 Got JSON-RPC error response 00:27:59.600 response: 00:27:59.600 { 00:27:59.600 "code": -114, 00:27:59.600 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.600 } 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.600 00:11:33 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 request: 00:27:59.600 { 00:27:59.600 "name": "NVMe0", 00:27:59.600 "trtype": "tcp", 
00:27:59.600 "traddr": "10.0.0.2", 00:27:59.600 "hostaddr": "10.0.0.2", 00:27:59.600 "hostsvcid": "60000", 00:27:59.600 "adrfam": "ipv4", 00:27:59.600 "trsvcid": "4420", 00:27:59.600 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:59.600 "method": "bdev_nvme_attach_controller", 00:27:59.600 "req_id": 1 00:27:59.600 } 00:27:59.600 Got JSON-RPC error response 00:27:59.600 response: 00:27:59.600 { 00:27:59.600 "code": -114, 00:27:59.600 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.600 } 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 request: 00:27:59.600 { 00:27:59.600 "name": "NVMe0", 00:27:59.600 "trtype": "tcp", 00:27:59.600 "traddr": "10.0.0.2", 00:27:59.600 "hostaddr": "10.0.0.2", 00:27:59.600 "hostsvcid": "60000", 00:27:59.600 "adrfam": "ipv4", 00:27:59.600 "trsvcid": "4420", 00:27:59.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.600 "multipath": "disable", 00:27:59.600 "method": "bdev_nvme_attach_controller", 00:27:59.600 "req_id": 1 00:27:59.600 } 00:27:59.600 Got JSON-RPC error response 00:27:59.600 response: 00:27:59.600 { 00:27:59.600 "code": -114, 00:27:59.600 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:59.600 } 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.600 request: 00:27:59.600 { 00:27:59.600 "name": "NVMe0", 00:27:59.600 "trtype": "tcp", 00:27:59.600 "traddr": "10.0.0.2", 00:27:59.600 "hostaddr": "10.0.0.2", 00:27:59.600 "hostsvcid": "60000", 00:27:59.600 "adrfam": "ipv4", 00:27:59.600 "trsvcid": "4420", 00:27:59.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.600 "multipath": "failover", 00:27:59.600 "method": "bdev_nvme_attach_controller", 00:27:59.600 "req_id": 1 00:27:59.600 } 00:27:59.600 Got JSON-RPC error response 00:27:59.600 response: 00:27:59.600 { 00:27:59.600 "code": -114, 00:27:59.600 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:59.600 } 
00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.600 00:11:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.858 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.858 00:11:34 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.115 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:00.115 00:11:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:01.048 0 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1330646 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1330646 ']' 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1330646 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.048 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1330646 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1330646' 00:28:01.306 killing process with pid 1330646 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1330646 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1330646 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.306 00:11:35 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:01.306 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:01.306 [2024-07-16 00:11:33.576450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:01.306 [2024-07-16 00:11:33.576560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330646 ] 00:28:01.306 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.306 [2024-07-16 00:11:33.636788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.306 [2024-07-16 00:11:33.724029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.306 [2024-07-16 00:11:34.382795] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 29e609c9-3ab6-4b47-8e6a-bb78d905f140 already exists 00:28:01.306 [2024-07-16 00:11:34.382837] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:29e609c9-3ab6-4b47-8e6a-bb78d905f140 alias for bdev NVMe1n1 00:28:01.306 [2024-07-16 00:11:34.382858] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:01.306 Running I/O for 1 seconds... 
00:28:01.306 00:28:01.306 Latency(us) 00:28:01.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.306 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:01.306 NVMe0n1 : 1.00 16845.28 65.80 0.00 0.00 7584.83 6213.78 14272.28 00:28:01.306 =================================================================================================================== 00:28:01.306 Total : 16845.28 65.80 0.00 0.00 7584.83 6213.78 14272.28 00:28:01.306 Received shutdown signal, test time was about 1.000000 seconds 00:28:01.306 00:28:01.306 Latency(us) 00:28:01.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.306 =================================================================================================================== 00:28:01.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.306 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:01.306 rmmod nvme_tcp 00:28:01.306 rmmod nvme_fabrics 00:28:01.306 rmmod nvme_keyring 
00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1330539 ']' 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1330539 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1330539 ']' 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1330539 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.306 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1330539 00:28:01.566 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:01.566 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:01.566 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1330539' 00:28:01.566 killing process with pid 1330539 00:28:01.566 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1330539 00:28:01.566 00:11:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1330539 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.566 00:11:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.139 00:11:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:04.139 00:28:04.139 real 0m6.805s 00:28:04.139 user 0m11.121s 00:28:04.139 sys 0m1.912s 00:28:04.139 00:11:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:04.139 00:11:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 ************************************ 00:28:04.139 END TEST nvmf_multicontroller 00:28:04.139 ************************************ 00:28:04.139 00:11:38 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:04.139 00:11:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:04.139 00:11:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:04.139 00:11:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:04.139 ************************************ 00:28:04.139 START TEST nvmf_aer 00:28:04.139 ************************************ 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:04.139 * Looking for test storage... 
00:28:04.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.139 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:04.140 00:11:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.514 00:11:39 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:05.514 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:05.514 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:05.514 Found net devices under 0000:08:00.0: cvl_0_0 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:05.514 Found net devices under 0000:08:00.1: cvl_0_1 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.514 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:28:05.515 00:28:05.515 --- 10.0.0.2 ping statistics --- 00:28:05.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.515 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:05.515 00:28:05.515 --- 10.0.0.1 ping statistics --- 00:28:05.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.515 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1332239 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1332239 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1332239 ']' 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.515 00:11:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.515 [2024-07-16 00:11:39.864442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:05.515 [2024-07-16 00:11:39.864543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.515 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.515 [2024-07-16 00:11:39.931736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.515 [2024-07-16 00:11:40.024226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:05.515 [2024-07-16 00:11:40.024282] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.515 [2024-07-16 00:11:40.024298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.515 [2024-07-16 00:11:40.024312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.515 [2024-07-16 00:11:40.024324] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:05.515 [2024-07-16 00:11:40.024414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.515 [2024-07-16 00:11:40.024742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.773 [2024-07-16 00:11:40.028164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.773 [2024-07-16 00:11:40.028316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 [2024-07-16 00:11:40.167833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.773 00:11:40 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 Malloc0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 [2024-07-16 00:11:40.218098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.773 [ 00:28:05.773 { 00:28:05.773 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.773 "subtype": "Discovery", 00:28:05.773 "listen_addresses": [], 00:28:05.773 "allow_any_host": true, 00:28:05.773 "hosts": [] 00:28:05.773 }, 00:28:05.773 { 00:28:05.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.773 "subtype": "NVMe", 00:28:05.773 "listen_addresses": [ 00:28:05.773 { 00:28:05.773 "trtype": "TCP", 00:28:05.773 "adrfam": "IPv4", 00:28:05.773 "traddr": "10.0.0.2", 00:28:05.773 "trsvcid": "4420" 00:28:05.773 } 00:28:05.773 ], 00:28:05.773 "allow_any_host": true, 00:28:05.773 "hosts": [], 00:28:05.773 "serial_number": "SPDK00000000000001", 00:28:05.773 "model_number": "SPDK bdev Controller", 00:28:05.773 "max_namespaces": 2, 00:28:05.773 "min_cntlid": 1, 00:28:05.773 "max_cntlid": 65519, 00:28:05.773 "namespaces": [ 00:28:05.773 { 00:28:05.773 "nsid": 1, 00:28:05.773 "bdev_name": "Malloc0", 00:28:05.773 "name": "Malloc0", 00:28:05.773 "nguid": "ACE10E06A822464BB13D5AA9300165B6", 00:28:05.773 "uuid": "ace10e06-a822-464b-b13d-5aa9300165b6" 00:28:05.773 } 00:28:05.773 ] 00:28:05.773 } 00:28:05.773 ] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1332352 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:05.773 00:11:40 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:05.773 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:05.773 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.032 Malloc1 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.032 [ 00:28:06.032 { 00:28:06.032 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:06.032 "subtype": "Discovery", 00:28:06.032 "listen_addresses": [], 00:28:06.032 "allow_any_host": true, 00:28:06.032 "hosts": [] 00:28:06.032 }, 00:28:06.032 { 00:28:06.032 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.032 "subtype": "NVMe", 00:28:06.032 "listen_addresses": [ 00:28:06.032 { 00:28:06.032 "trtype": "TCP", 00:28:06.032 "adrfam": "IPv4", 00:28:06.032 "traddr": "10.0.0.2", 00:28:06.032 "trsvcid": "4420" 00:28:06.032 } 00:28:06.032 ], 00:28:06.032 "allow_any_host": true, 00:28:06.032 "hosts": [], 00:28:06.032 "serial_number": "SPDK00000000000001", 00:28:06.032 "model_number": "SPDK bdev Controller", 00:28:06.032 "max_namespaces": 2, 00:28:06.032 "min_cntlid": 1, 00:28:06.032 "max_cntlid": 65519, 
00:28:06.032 "namespaces": [ 00:28:06.032 { 00:28:06.032 "nsid": 1, 00:28:06.032 "bdev_name": "Malloc0", 00:28:06.032 "name": "Malloc0", 00:28:06.032 "nguid": "ACE10E06A822464BB13D5AA9300165B6", 00:28:06.032 "uuid": "ace10e06-a822-464b-b13d-5aa9300165b6" 00:28:06.032 }, 00:28:06.032 { 00:28:06.032 "nsid": 2, 00:28:06.032 "bdev_name": "Malloc1", 00:28:06.032 "name": "Malloc1", 00:28:06.032 "nguid": "98507812827C409D9540170EC36F61D9", 00:28:06.032 "uuid": "98507812-827c-409d-9540-170ec36f61d9" 00:28:06.032 } 00:28:06.032 ] 00:28:06.032 } 00:28:06.032 ] 00:28:06.032 Asynchronous Event Request test 00:28:06.032 Attaching to 10.0.0.2 00:28:06.032 Attached to 10.0.0.2 00:28:06.032 Registering asynchronous event callbacks... 00:28:06.032 Starting namespace attribute notice tests for all controllers... 00:28:06.032 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:06.032 aer_cb - Changed Namespace 00:28:06.032 Cleaning up... 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1332352 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.032 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.291 
00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.291 rmmod nvme_tcp 00:28:06.291 rmmod nvme_fabrics 00:28:06.291 rmmod nvme_keyring 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1332239 ']' 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1332239 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1332239 ']' 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1332239 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1332239 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1332239' 00:28:06.291 killing process with pid 1332239 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1332239 00:28:06.291 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1332239 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.549 00:11:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.452 00:11:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:08.452 00:28:08.452 real 0m4.764s 00:28:08.452 user 0m3.692s 00:28:08.452 sys 0m1.556s 00:28:08.452 00:11:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:08.452 00:11:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 ************************************ 00:28:08.452 END TEST nvmf_aer 00:28:08.452 ************************************ 00:28:08.452 00:11:42 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.452 00:11:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:28:08.452 00:11:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:08.452 00:11:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.452 ************************************ 00:28:08.452 START TEST nvmf_async_init 00:28:08.452 ************************************ 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:08.452 * Looking for test storage... 00:28:08.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.452 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:08.453 00:11:42 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e6d2b95cbe924e2a8071a67a3fad3f4b 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.453 00:11:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.710 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:08.710 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:08.710 00:11:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:08.710 00:11:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.086 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 
00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.087 00:11:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:10.087 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:10.087 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.087 
00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:10.087 Found net devices under 0000:08:00.0: cvl_0_0 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:10.087 Found net devices under 0000:08:00.1: cvl_0_1 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.087 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.345 
00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:28:10.345 00:28:10.345 --- 10.0.0.2 ping statistics --- 00:28:10.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.345 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:28:10.345 00:28:10.345 --- 10.0.0.1 ping statistics --- 00:28:10.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.345 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1333824 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1333824 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1333824 ']' 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.345 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.345 [2024-07-16 00:11:44.735765] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:10.345 [2024-07-16 00:11:44.735870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.345 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.345 [2024-07-16 00:11:44.804612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.602 [2024-07-16 00:11:44.894535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.602 [2024-07-16 00:11:44.894594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.602 [2024-07-16 00:11:44.894610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.602 [2024-07-16 00:11:44.894623] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.602 [2024-07-16 00:11:44.894635] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:10.602 [2024-07-16 00:11:44.894665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.602 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:10.602 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:10.602 00:11:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:10.602 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.602 00:11:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.602 [2024-07-16 00:11:45.030749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.602 null0 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.602 
00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.602 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e6d2b95cbe924e2a8071a67a3fad3f4b 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.603 [2024-07-16 00:11:45.070967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.603 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.860 nvme0n1 00:28:10.860 00:11:45 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.860 [ 00:28:10.860 { 00:28:10.860 "name": "nvme0n1", 00:28:10.860 "aliases": [ 00:28:10.860 "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b" 00:28:10.860 ], 00:28:10.860 "product_name": "NVMe disk", 00:28:10.860 "block_size": 512, 00:28:10.860 "num_blocks": 2097152, 00:28:10.860 "uuid": "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b", 00:28:10.860 "assigned_rate_limits": { 00:28:10.860 "rw_ios_per_sec": 0, 00:28:10.860 "rw_mbytes_per_sec": 0, 00:28:10.860 "r_mbytes_per_sec": 0, 00:28:10.860 "w_mbytes_per_sec": 0 00:28:10.860 }, 00:28:10.860 "claimed": false, 00:28:10.860 "zoned": false, 00:28:10.860 "supported_io_types": { 00:28:10.860 "read": true, 00:28:10.860 "write": true, 00:28:10.860 "unmap": false, 00:28:10.860 "write_zeroes": true, 00:28:10.860 "flush": true, 00:28:10.860 "reset": true, 00:28:10.860 "compare": true, 00:28:10.860 "compare_and_write": true, 00:28:10.860 "abort": true, 00:28:10.860 "nvme_admin": true, 00:28:10.860 "nvme_io": true 00:28:10.860 }, 00:28:10.860 "memory_domains": [ 00:28:10.860 { 00:28:10.860 "dma_device_id": "system", 00:28:10.860 "dma_device_type": 1 00:28:10.860 } 00:28:10.860 ], 00:28:10.860 "driver_specific": { 00:28:10.860 "nvme": [ 00:28:10.860 { 00:28:10.860 "trid": { 00:28:10.860 "trtype": "TCP", 00:28:10.860 "adrfam": "IPv4", 00:28:10.860 "traddr": "10.0.0.2", 00:28:10.860 "trsvcid": "4420", 00:28:10.860 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:10.860 }, 00:28:10.860 "ctrlr_data": { 00:28:10.860 "cntlid": 1, 00:28:10.860 "vendor_id": "0x8086", 00:28:10.860 "model_number": "SPDK bdev Controller", 00:28:10.860 "serial_number": "00000000000000000000", 
00:28:10.860 "firmware_revision": "24.05.1", 00:28:10.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.860 "oacs": { 00:28:10.860 "security": 0, 00:28:10.860 "format": 0, 00:28:10.860 "firmware": 0, 00:28:10.860 "ns_manage": 0 00:28:10.860 }, 00:28:10.860 "multi_ctrlr": true, 00:28:10.860 "ana_reporting": false 00:28:10.860 }, 00:28:10.860 "vs": { 00:28:10.860 "nvme_version": "1.3" 00:28:10.860 }, 00:28:10.860 "ns_data": { 00:28:10.860 "id": 1, 00:28:10.860 "can_share": true 00:28:10.860 } 00:28:10.860 } 00:28:10.860 ], 00:28:10.860 "mp_policy": "active_passive" 00:28:10.860 } 00:28:10.860 } 00:28:10.860 ] 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.860 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:10.860 [2024-07-16 00:11:45.323613] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:10.860 [2024-07-16 00:11:45.323714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e8840 (9): Bad file descriptor 00:28:11.119 [2024-07-16 00:11:45.466305] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 [ 00:28:11.119 { 00:28:11.119 "name": "nvme0n1", 00:28:11.119 "aliases": [ 00:28:11.119 "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b" 00:28:11.119 ], 00:28:11.119 "product_name": "NVMe disk", 00:28:11.119 "block_size": 512, 00:28:11.119 "num_blocks": 2097152, 00:28:11.119 "uuid": "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b", 00:28:11.119 "assigned_rate_limits": { 00:28:11.119 "rw_ios_per_sec": 0, 00:28:11.119 "rw_mbytes_per_sec": 0, 00:28:11.119 "r_mbytes_per_sec": 0, 00:28:11.119 "w_mbytes_per_sec": 0 00:28:11.119 }, 00:28:11.119 "claimed": false, 00:28:11.119 "zoned": false, 00:28:11.119 "supported_io_types": { 00:28:11.119 "read": true, 00:28:11.119 "write": true, 00:28:11.119 "unmap": false, 00:28:11.119 "write_zeroes": true, 00:28:11.119 "flush": true, 00:28:11.119 "reset": true, 00:28:11.119 "compare": true, 00:28:11.119 "compare_and_write": true, 00:28:11.119 "abort": true, 00:28:11.119 "nvme_admin": true, 00:28:11.119 "nvme_io": true 00:28:11.119 }, 00:28:11.119 "memory_domains": [ 00:28:11.119 { 00:28:11.119 "dma_device_id": "system", 00:28:11.119 "dma_device_type": 1 00:28:11.119 } 00:28:11.119 ], 00:28:11.119 "driver_specific": { 00:28:11.119 "nvme": [ 00:28:11.119 { 00:28:11.119 "trid": { 00:28:11.119 "trtype": "TCP", 00:28:11.119 "adrfam": "IPv4", 00:28:11.119 "traddr": "10.0.0.2", 00:28:11.119 "trsvcid": "4420", 00:28:11.119 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:11.119 }, 00:28:11.119 "ctrlr_data": { 00:28:11.119 "cntlid": 2, 00:28:11.119 "vendor_id": "0x8086", 00:28:11.119 "model_number": "SPDK bdev Controller", 00:28:11.119 "serial_number": 
"00000000000000000000", 00:28:11.119 "firmware_revision": "24.05.1", 00:28:11.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.119 "oacs": { 00:28:11.119 "security": 0, 00:28:11.119 "format": 0, 00:28:11.119 "firmware": 0, 00:28:11.119 "ns_manage": 0 00:28:11.119 }, 00:28:11.119 "multi_ctrlr": true, 00:28:11.119 "ana_reporting": false 00:28:11.119 }, 00:28:11.119 "vs": { 00:28:11.119 "nvme_version": "1.3" 00:28:11.119 }, 00:28:11.119 "ns_data": { 00:28:11.119 "id": 1, 00:28:11.119 "can_share": true 00:28:11.119 } 00:28:11.119 } 00:28:11.119 ], 00:28:11.119 "mp_policy": "active_passive" 00:28:11.119 } 00:28:11.119 } 00:28:11.119 ] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yQXp69UFdS 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yQXp69UFdS 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 [2024-07-16 00:11:45.524300] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:11.119 [2024-07-16 00:11:45.524426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yQXp69UFdS 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 [2024-07-16 00:11:45.532302] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yQXp69UFdS 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 [2024-07-16 00:11:45.540330] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:28:11.119 [2024-07-16 00:11:45.540393] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:11.119 nvme0n1 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.119 [ 00:28:11.119 { 00:28:11.119 "name": "nvme0n1", 00:28:11.119 "aliases": [ 00:28:11.119 "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b" 00:28:11.119 ], 00:28:11.119 "product_name": "NVMe disk", 00:28:11.119 "block_size": 512, 00:28:11.119 "num_blocks": 2097152, 00:28:11.119 "uuid": "e6d2b95c-be92-4e2a-8071-a67a3fad3f4b", 00:28:11.119 "assigned_rate_limits": { 00:28:11.119 "rw_ios_per_sec": 0, 00:28:11.119 "rw_mbytes_per_sec": 0, 00:28:11.119 "r_mbytes_per_sec": 0, 00:28:11.119 "w_mbytes_per_sec": 0 00:28:11.119 }, 00:28:11.119 "claimed": false, 00:28:11.119 "zoned": false, 00:28:11.119 "supported_io_types": { 00:28:11.119 "read": true, 00:28:11.119 "write": true, 00:28:11.119 "unmap": false, 00:28:11.119 "write_zeroes": true, 00:28:11.119 "flush": true, 00:28:11.119 "reset": true, 00:28:11.119 "compare": true, 00:28:11.119 "compare_and_write": true, 00:28:11.119 "abort": true, 00:28:11.119 "nvme_admin": true, 00:28:11.119 "nvme_io": true 00:28:11.119 }, 00:28:11.119 "memory_domains": [ 00:28:11.119 { 00:28:11.119 "dma_device_id": "system", 00:28:11.119 "dma_device_type": 1 00:28:11.119 } 00:28:11.119 ], 00:28:11.119 "driver_specific": { 00:28:11.119 "nvme": [ 00:28:11.119 { 00:28:11.119 "trid": { 00:28:11.119 "trtype": "TCP", 00:28:11.119 "adrfam": "IPv4", 00:28:11.119 "traddr": "10.0.0.2", 00:28:11.119 "trsvcid": "4421", 00:28:11.119 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:28:11.119 }, 00:28:11.119 "ctrlr_data": { 00:28:11.119 "cntlid": 3, 00:28:11.119 "vendor_id": "0x8086", 00:28:11.119 "model_number": "SPDK bdev Controller", 00:28:11.119 "serial_number": "00000000000000000000", 00:28:11.119 "firmware_revision": "24.05.1", 00:28:11.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.119 "oacs": { 00:28:11.119 "security": 0, 00:28:11.119 "format": 0, 00:28:11.119 "firmware": 0, 00:28:11.119 "ns_manage": 0 00:28:11.119 }, 00:28:11.119 "multi_ctrlr": true, 00:28:11.119 "ana_reporting": false 00:28:11.119 }, 00:28:11.119 "vs": { 00:28:11.119 "nvme_version": "1.3" 00:28:11.119 }, 00:28:11.119 "ns_data": { 00:28:11.119 "id": 1, 00:28:11.119 "can_share": true 00:28:11.119 } 00:28:11.119 } 00:28:11.119 ], 00:28:11.119 "mp_policy": "active_passive" 00:28:11.119 } 00:28:11.119 } 00:28:11.119 ] 00:28:11.119 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.120 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.120 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.120 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:11.378 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.yQXp69UFdS 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.379 rmmod nvme_tcp 00:28:11.379 rmmod nvme_fabrics 00:28:11.379 rmmod nvme_keyring 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1333824 ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1333824 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1333824 ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1333824 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1333824 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1333824' 00:28:11.379 killing process with pid 1333824 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1333824 00:28:11.379 [2024-07-16 00:11:45.722396] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:11.379 [2024-07-16 00:11:45.722437] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1333824 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.379 00:11:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.917 00:11:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.917 00:28:13.917 real 0m5.033s 00:28:13.917 user 0m1.929s 00:28:13.917 sys 0m1.513s 00:28:13.917 00:11:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.917 00:11:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:13.917 ************************************ 00:28:13.917 END TEST nvmf_async_init 00:28:13.917 ************************************ 00:28:13.917 00:11:47 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.917 00:11:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:13.917 00:11:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.917 00:11:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.917 
************************************ 00:28:13.917 START TEST dma 00:28:13.917 ************************************ 00:28:13.917 00:11:47 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:13.917 * Looking for test storage... 00:28:13.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.917 00:11:48 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.917 00:11:48 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.917 00:11:48 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.917 00:11:48 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.917 00:11:48 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.917 00:11:48 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.917 00:11:48 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.917 00:11:48 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:13.917 00:11:48 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.917 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.918 00:11:48 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.918 00:11:48 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:13.918 00:11:48 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:13.918 00:28:13.918 real 0m0.066s 00:28:13.918 user 0m0.038s 00:28:13.918 sys 0m0.034s 00:28:13.918 00:11:48 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:13.918 00:11:48 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:13.918 ************************************ 00:28:13.918 END TEST dma 00:28:13.918 ************************************ 00:28:13.918 00:11:48 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.918 00:11:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:13.918 00:11:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:13.918 00:11:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.918 ************************************ 00:28:13.918 START TEST nvmf_identify 00:28:13.918 ************************************ 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:13.918 * Looking for test storage... 
00:28:13.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.918 00:11:48 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.918 00:11:48 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.918 00:11:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.296 00:11:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.296 
00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:15.296 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:15.296 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:15.296 Found net devices under 0000:08:00.0: cvl_0_0 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:15.296 Found net devices under 0000:08:00.1: cvl_0_1 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.296 00:11:49 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.296 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:28:15.555 00:28:15.555 --- 10.0.0.2 ping statistics --- 00:28:15.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.555 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:28:15.555 00:28:15.555 --- 10.0.0.1 ping statistics --- 00:28:15.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.555 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1335382 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1335382 00:28:15.555 00:11:49 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1335382 ']' 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:15.555 00:11:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.555 [2024-07-16 00:11:49.950849] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:15.555 [2024-07-16 00:11:49.950953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.555 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.555 [2024-07-16 00:11:50.021246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.814 [2024-07-16 00:11:50.113907] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.814 [2024-07-16 00:11:50.113966] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.814 [2024-07-16 00:11:50.113983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.814 [2024-07-16 00:11:50.113996] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.814 [2024-07-16 00:11:50.114008] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:15.814 [2024-07-16 00:11:50.114084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.814 [2024-07-16 00:11:50.114146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.814 [2024-07-16 00:11:50.114171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.814 [2024-07-16 00:11:50.114177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 [2024-07-16 00:11:50.232756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 Malloc0 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.814 
00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 [2024-07-16 00:11:50.311163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:15.814 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.814 00:11:50 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.074 [ 00:28:16.074 { 00:28:16.074 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:16.074 "subtype": "Discovery", 00:28:16.074 "listen_addresses": [ 00:28:16.074 { 00:28:16.074 "trtype": "TCP", 00:28:16.074 "adrfam": "IPv4", 00:28:16.074 "traddr": "10.0.0.2", 00:28:16.074 "trsvcid": "4420" 00:28:16.074 } 00:28:16.074 ], 00:28:16.074 "allow_any_host": true, 00:28:16.074 "hosts": [] 00:28:16.074 }, 00:28:16.074 { 00:28:16.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:16.074 "subtype": "NVMe", 00:28:16.074 "listen_addresses": [ 00:28:16.074 { 00:28:16.074 "trtype": "TCP", 00:28:16.074 "adrfam": "IPv4", 00:28:16.074 "traddr": "10.0.0.2", 00:28:16.074 "trsvcid": "4420" 00:28:16.074 } 00:28:16.074 ], 00:28:16.074 "allow_any_host": true, 00:28:16.074 "hosts": [], 00:28:16.074 "serial_number": "SPDK00000000000001", 00:28:16.074 "model_number": "SPDK bdev Controller", 00:28:16.074 "max_namespaces": 32, 00:28:16.074 "min_cntlid": 1, 00:28:16.074 "max_cntlid": 65519, 00:28:16.074 "namespaces": [ 00:28:16.074 { 00:28:16.074 "nsid": 1, 00:28:16.074 "bdev_name": "Malloc0", 00:28:16.074 "name": "Malloc0", 00:28:16.074 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:16.074 "eui64": "ABCDEF0123456789", 00:28:16.074 "uuid": "bdbab89b-8c5a-4563-9727-b71949111798" 00:28:16.074 } 00:28:16.074 ] 00:28:16.074 } 00:28:16.074 ] 00:28:16.074 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.074 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:16.074 [2024-07-16 00:11:50.352112] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:28:16.075 [2024-07-16 00:11:50.352175] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335493 ] 00:28:16.075 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.075 [2024-07-16 00:11:50.395011] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:16.075 [2024-07-16 00:11:50.395081] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:16.075 [2024-07-16 00:11:50.395091] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:16.075 [2024-07-16 00:11:50.395110] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:16.075 [2024-07-16 00:11:50.395125] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:16.075 [2024-07-16 00:11:50.395325] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:16.075 [2024-07-16 00:11:50.395379] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1043030 0 00:28:16.075 [2024-07-16 00:11:50.409150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:16.075 [2024-07-16 00:11:50.409173] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:16.075 [2024-07-16 00:11:50.409182] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:16.075 [2024-07-16 00:11:50.409190] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:16.075 [2024-07-16 00:11:50.409239] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.409252] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:16.075 [2024-07-16 00:11:50.409261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.409279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:16.075 [2024-07-16 00:11:50.409309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.417155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.417174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.417182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.417209] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:16.075 [2024-07-16 00:11:50.417221] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:16.075 [2024-07-16 00:11:50.417231] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:16.075 [2024-07-16 00:11:50.417256] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.417286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.417312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.417446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.417459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.417467] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417475] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.417497] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:16.075 [2024-07-16 00:11:50.417512] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:16.075 [2024-07-16 00:11:50.417526] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.417553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.417576] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.417708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.417721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.417729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417736] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 
00:11:50.417748] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:16.075 [2024-07-16 00:11:50.417763] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.417776] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.417803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.417825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.417924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.417939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.417946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417954] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.417966] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.417984] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.417993] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.418012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.418034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.418165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.418179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.418187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418195] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.418207] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:16.075 [2024-07-16 00:11:50.418217] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.418236] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.418350] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:16.075 [2024-07-16 00:11:50.418359] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.418375] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418391] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.418402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.418425] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.418556] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.418569] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.418577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.418596] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:16.075 [2024-07-16 00:11:50.418614] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418623] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.418642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.418664] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.418796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.075 [2024-07-16 00:11:50.418810] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.075 [2024-07-16 00:11:50.418818] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.075 [2024-07-16 00:11:50.418836] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:16.075 [2024-07-16 00:11:50.418846] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:16.075 [2024-07-16 00:11:50.418861] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:16.075 [2024-07-16 00:11:50.418876] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:16.075 [2024-07-16 00:11:50.418895] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.418904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.075 [2024-07-16 00:11:50.418917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.075 [2024-07-16 00:11:50.418939] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.075 [2024-07-16 00:11:50.419076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.075 [2024-07-16 00:11:50.419095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.075 [2024-07-16 00:11:50.419103] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.419111] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1043030): datao=0, datal=4096, cccid=0 00:28:16.075 [2024-07-16 00:11:50.419120] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x109c100) on tqpair(0x1043030): expected_datao=0, payload_size=4096 00:28:16.075 [2024-07-16 00:11:50.419130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.075 [2024-07-16 00:11:50.419156] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419167] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419191] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.419204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.419211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.419238] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:16.076 [2024-07-16 00:11:50.419250] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:16.076 [2024-07-16 00:11:50.419259] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:16.076 [2024-07-16 00:11:50.419268] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:16.076 [2024-07-16 00:11:50.419277] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:16.076 [2024-07-16 00:11:50.419286] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:16.076 [2024-07-16 
00:11:50.419302] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:16.076 [2024-07-16 00:11:50.419315] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419331] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.076 [2024-07-16 00:11:50.419366] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.076 [2024-07-16 00:11:50.419474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.419489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.419497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c100) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.419521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419529] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.076 [2024-07-16 00:11:50.419559] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419566] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419574] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.076 [2024-07-16 00:11:50.419599] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.076 [2024-07-16 00:11:50.419635] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419642] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419650] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.076 [2024-07-16 00:11:50.419669] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:16.076 [2024-07-16 00:11:50.419690] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:16.076 [2024-07-16 00:11:50.419703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419711] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.419723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.076 [2024-07-16 00:11:50.419747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c100, cid 0, qid 0 00:28:16.076 [2024-07-16 00:11:50.419759] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c260, cid 1, qid 0 00:28:16.076 [2024-07-16 00:11:50.419768] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c3c0, cid 2, qid 0 00:28:16.076 [2024-07-16 00:11:50.419777] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.076 [2024-07-16 00:11:50.419786] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c680, cid 4, qid 0 00:28:16.076 [2024-07-16 00:11:50.419917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.419930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.419938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419946] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c680) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.419958] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:16.076 [2024-07-16 00:11:50.419968] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:16.076 [2024-07-16 00:11:50.419987] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.419996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.420008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.076 [2024-07-16 00:11:50.420032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c680, cid 4, qid 0 00:28:16.076 [2024-07-16 00:11:50.420155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.076 [2024-07-16 00:11:50.420171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.076 [2024-07-16 00:11:50.420179] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420187] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1043030): datao=0, datal=4096, cccid=4 00:28:16.076 [2024-07-16 00:11:50.420200] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x109c680) on tqpair(0x1043030): expected_datao=0, payload_size=4096 00:28:16.076 [2024-07-16 00:11:50.420209] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420221] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420229] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420242] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.420253] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.420260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c680) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.420289] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:16.076 [2024-07-16 00:11:50.420325] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420336] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.420349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.076 [2024-07-16 00:11:50.420362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420377] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.420388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.076 [2024-07-16 00:11:50.420415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c680, cid 4, qid 0 00:28:16.076 [2024-07-16 00:11:50.420428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c7e0, cid 5, qid 0 00:28:16.076 [2024-07-16 00:11:50.420610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.076 [2024-07-16 00:11:50.420623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.076 [2024-07-16 00:11:50.420631] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420638] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1043030): datao=0, datal=1024, cccid=4 00:28:16.076 [2024-07-16 00:11:50.420647] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x109c680) on tqpair(0x1043030): expected_datao=0, payload_size=1024 00:28:16.076 [2024-07-16 00:11:50.420656] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420667] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420675] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420685] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.420695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.420702] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.420710] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c7e0) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.465153] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.076 [2024-07-16 00:11:50.465173] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.076 [2024-07-16 00:11:50.465180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.465188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c680) on tqpair=0x1043030 00:28:16.076 [2024-07-16 00:11:50.465214] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.076 [2024-07-16 00:11:50.465225] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1043030) 00:28:16.076 [2024-07-16 00:11:50.465241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.076 [2024-07-16 00:11:50.465274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c680, cid 4, qid 0 00:28:16.076 [2024-07-16 00:11:50.465385] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.076 [2024-07-16 00:11:50.465401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.076 [2024-07-16 00:11:50.465409] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
enter 00:28:16.077 [2024-07-16 00:11:50.465416] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1043030): datao=0, datal=3072, cccid=4 00:28:16.077 [2024-07-16 00:11:50.465425] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x109c680) on tqpair(0x1043030): expected_datao=0, payload_size=3072 00:28:16.077 [2024-07-16 00:11:50.465434] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465454] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465464] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465488] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.077 [2024-07-16 00:11:50.465500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.077 [2024-07-16 00:11:50.465508] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c680) on tqpair=0x1043030 00:28:16.077 [2024-07-16 00:11:50.465532] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1043030) 00:28:16.077 [2024-07-16 00:11:50.465553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.077 [2024-07-16 00:11:50.465582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c680, cid 4, qid 0 00:28:16.077 [2024-07-16 00:11:50.465697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.077 [2024-07-16 00:11:50.465712] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.077 [2024-07-16 00:11:50.465720] 
nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465727] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1043030): datao=0, datal=8, cccid=4 00:28:16.077 [2024-07-16 00:11:50.465736] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x109c680) on tqpair(0x1043030): expected_datao=0, payload_size=8 00:28:16.077 [2024-07-16 00:11:50.465745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465756] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.465764] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.506237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.077 [2024-07-16 00:11:50.506257] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.077 [2024-07-16 00:11:50.506265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.077 [2024-07-16 00:11:50.506273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c680) on tqpair=0x1043030 00:28:16.077 ===================================================== 00:28:16.077 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:16.077 ===================================================== 00:28:16.077 Controller Capabilities/Features 00:28:16.077 ================================ 00:28:16.077 Vendor ID: 0000 00:28:16.077 Subsystem Vendor ID: 0000 00:28:16.077 Serial Number: .................... 00:28:16.077 Model Number: ........................................ 
00:28:16.077 Firmware Version: 24.05.1 00:28:16.077 Recommended Arb Burst: 0 00:28:16.077 IEEE OUI Identifier: 00 00 00 00:28:16.077 Multi-path I/O 00:28:16.077 May have multiple subsystem ports: No 00:28:16.077 May have multiple controllers: No 00:28:16.077 Associated with SR-IOV VF: No 00:28:16.077 Max Data Transfer Size: 131072 00:28:16.077 Max Number of Namespaces: 0 00:28:16.077 Max Number of I/O Queues: 1024 00:28:16.077 NVMe Specification Version (VS): 1.3 00:28:16.077 NVMe Specification Version (Identify): 1.3 00:28:16.077 Maximum Queue Entries: 128 00:28:16.077 Contiguous Queues Required: Yes 00:28:16.077 Arbitration Mechanisms Supported 00:28:16.077 Weighted Round Robin: Not Supported 00:28:16.077 Vendor Specific: Not Supported 00:28:16.077 Reset Timeout: 15000 ms 00:28:16.077 Doorbell Stride: 4 bytes 00:28:16.077 NVM Subsystem Reset: Not Supported 00:28:16.077 Command Sets Supported 00:28:16.077 NVM Command Set: Supported 00:28:16.077 Boot Partition: Not Supported 00:28:16.077 Memory Page Size Minimum: 4096 bytes 00:28:16.077 Memory Page Size Maximum: 4096 bytes 00:28:16.077 Persistent Memory Region: Not Supported 00:28:16.077 Optional Asynchronous Events Supported 00:28:16.077 Namespace Attribute Notices: Not Supported 00:28:16.077 Firmware Activation Notices: Not Supported 00:28:16.077 ANA Change Notices: Not Supported 00:28:16.077 PLE Aggregate Log Change Notices: Not Supported 00:28:16.077 LBA Status Info Alert Notices: Not Supported 00:28:16.077 EGE Aggregate Log Change Notices: Not Supported 00:28:16.077 Normal NVM Subsystem Shutdown event: Not Supported 00:28:16.077 Zone Descriptor Change Notices: Not Supported 00:28:16.077 Discovery Log Change Notices: Supported 00:28:16.077 Controller Attributes 00:28:16.077 128-bit Host Identifier: Not Supported 00:28:16.077 Non-Operational Permissive Mode: Not Supported 00:28:16.077 NVM Sets: Not Supported 00:28:16.077 Read Recovery Levels: Not Supported 00:28:16.077 Endurance Groups: Not Supported 
00:28:16.077 Predictable Latency Mode: Not Supported 00:28:16.077 Traffic Based Keep Alive: Not Supported 00:28:16.077 Namespace Granularity: Not Supported 00:28:16.077 SQ Associations: Not Supported 00:28:16.077 UUID List: Not Supported 00:28:16.077 Multi-Domain Subsystem: Not Supported 00:28:16.077 Fixed Capacity Management: Not Supported 00:28:16.077 Variable Capacity Management: Not Supported 00:28:16.077 Delete Endurance Group: Not Supported 00:28:16.077 Delete NVM Set: Not Supported 00:28:16.077 Extended LBA Formats Supported: Not Supported 00:28:16.077 Flexible Data Placement Supported: Not Supported 00:28:16.077 00:28:16.077 Controller Memory Buffer Support 00:28:16.077 ================================ 00:28:16.077 Supported: No 00:28:16.077 00:28:16.077 Persistent Memory Region Support 00:28:16.077 ================================ 00:28:16.077 Supported: No 00:28:16.077 00:28:16.077 Admin Command Set Attributes 00:28:16.077 ============================ 00:28:16.077 Security Send/Receive: Not Supported 00:28:16.077 Format NVM: Not Supported 00:28:16.077 Firmware Activate/Download: Not Supported 00:28:16.077 Namespace Management: Not Supported 00:28:16.077 Device Self-Test: Not Supported 00:28:16.077 Directives: Not Supported 00:28:16.077 NVMe-MI: Not Supported 00:28:16.077 Virtualization Management: Not Supported 00:28:16.077 Doorbell Buffer Config: Not Supported 00:28:16.077 Get LBA Status Capability: Not Supported 00:28:16.077 Command & Feature Lockdown Capability: Not Supported 00:28:16.077 Abort Command Limit: 1 00:28:16.077 Async Event Request Limit: 4 00:28:16.077 Number of Firmware Slots: N/A 00:28:16.077 Firmware Slot 1 Read-Only: N/A 00:28:16.077 Firmware Activation Without Reset: N/A 00:28:16.077 Multiple Update Detection Support: N/A 00:28:16.077 Firmware Update Granularity: No Information Provided 00:28:16.077 Per-Namespace SMART Log: No 00:28:16.077 Asymmetric Namespace Access Log Page: Not Supported 00:28:16.077 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:28:16.077 Command Effects Log Page: Not Supported 00:28:16.077 Get Log Page Extended Data: Supported 00:28:16.077 Telemetry Log Pages: Not Supported 00:28:16.077 Persistent Event Log Pages: Not Supported 00:28:16.077 Supported Log Pages Log Page: May Support 00:28:16.077 Commands Supported & Effects Log Page: Not Supported 00:28:16.077 Feature Identifiers & Effects Log Page: May Support 00:28:16.077 NVMe-MI Commands & Effects Log Page: May Support 00:28:16.077 Data Area 4 for Telemetry Log: Not Supported 00:28:16.077 Error Log Page Entries Supported: 128 00:28:16.077 Keep Alive: Not Supported 00:28:16.077 00:28:16.077 NVM Command Set Attributes 00:28:16.077 ========================== 00:28:16.077 Submission Queue Entry Size 00:28:16.077 Max: 1 00:28:16.077 Min: 1 00:28:16.077 Completion Queue Entry Size 00:28:16.077 Max: 1 00:28:16.077 Min: 1 00:28:16.077 Number of Namespaces: 0 00:28:16.077 Compare Command: Not Supported 00:28:16.077 Write Uncorrectable Command: Not Supported 00:28:16.077 Dataset Management Command: Not Supported 00:28:16.077 Write Zeroes Command: Not Supported 00:28:16.077 Set Features Save Field: Not Supported 00:28:16.077 Reservations: Not Supported 00:28:16.077 Timestamp: Not Supported 00:28:16.077 Copy: Not Supported 00:28:16.077 Volatile Write Cache: Not Present 00:28:16.077 Atomic Write Unit (Normal): 1 00:28:16.077 Atomic Write Unit (PFail): 1 00:28:16.077 Atomic Compare & Write Unit: 1 00:28:16.077 Fused Compare & Write: Supported 00:28:16.077 Scatter-Gather List 00:28:16.077 SGL Command Set: Supported 00:28:16.077 SGL Keyed: Supported 00:28:16.077 SGL Bit Bucket Descriptor: Not Supported 00:28:16.077 SGL Metadata Pointer: Not Supported 00:28:16.077 Oversized SGL: Not Supported 00:28:16.077 SGL Metadata Address: Not Supported 00:28:16.077 SGL Offset: Supported 00:28:16.077 Transport SGL Data Block: Not Supported 00:28:16.077 Replay Protected Memory Block: Not Supported 00:28:16.077 00:28:16.077 
Firmware Slot Information 00:28:16.077 ========================= 00:28:16.077 Active slot: 0 00:28:16.077 00:28:16.077 00:28:16.077 Error Log 00:28:16.077 ========= 00:28:16.077 00:28:16.077 Active Namespaces 00:28:16.077 ================= 00:28:16.077 Discovery Log Page 00:28:16.077 ================== 00:28:16.077 Generation Counter: 2 00:28:16.077 Number of Records: 2 00:28:16.077 Record Format: 0 00:28:16.077 00:28:16.077 Discovery Log Entry 0 00:28:16.077 ---------------------- 00:28:16.077 Transport Type: 3 (TCP) 00:28:16.077 Address Family: 1 (IPv4) 00:28:16.077 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:16.077 Entry Flags: 00:28:16.077 Duplicate Returned Information: 1 00:28:16.078 Explicit Persistent Connection Support for Discovery: 1 00:28:16.078 Transport Requirements: 00:28:16.078 Secure Channel: Not Required 00:28:16.078 Port ID: 0 (0x0000) 00:28:16.078 Controller ID: 65535 (0xffff) 00:28:16.078 Admin Max SQ Size: 128 00:28:16.078 Transport Service Identifier: 4420 00:28:16.078 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:16.078 Transport Address: 10.0.0.2 00:28:16.078 Discovery Log Entry 1 00:28:16.078 ---------------------- 00:28:16.078 Transport Type: 3 (TCP) 00:28:16.078 Address Family: 1 (IPv4) 00:28:16.078 Subsystem Type: 2 (NVM Subsystem) 00:28:16.078 Entry Flags: 00:28:16.078 Duplicate Returned Information: 0 00:28:16.078 Explicit Persistent Connection Support for Discovery: 0 00:28:16.078 Transport Requirements: 00:28:16.078 Secure Channel: Not Required 00:28:16.078 Port ID: 0 (0x0000) 00:28:16.078 Controller ID: 65535 (0xffff) 00:28:16.078 Admin Max SQ Size: 128 00:28:16.078 Transport Service Identifier: 4420 00:28:16.078 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:16.078 Transport Address: 10.0.0.2 [2024-07-16 00:11:50.506399] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:16.078 [2024-07-16 00:11:50.506426] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.078 [2024-07-16 00:11:50.506439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.078 [2024-07-16 00:11:50.506450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.078 [2024-07-16 00:11:50.506462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.078 [2024-07-16 00:11:50.506485] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.506496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.506504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.506516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.506541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.506676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.506691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.506699] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.506707] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.506723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.506732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 
00:11:50.506739] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.506751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.506779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.506934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.506948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.506955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.506963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.506976] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:16.078 [2024-07-16 00:11:50.506985] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:16.078 [2024-07-16 00:11:50.507002] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.507031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.507053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.507155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 
00:11:50.507171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.507179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.507207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507224] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.507236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.507258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.507392] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.507409] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.507417] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.507444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507454] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.507473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 
[2024-07-16 00:11:50.507495] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.507625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.507638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.507646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.507672] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.507701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.507722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.507853] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.507866] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.507874] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507881] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.507900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.507917] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.507928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.507950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.508051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.508066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.508074] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.508081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 [2024-07-16 00:11:50.508100] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.508110] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.508118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1043030) 00:28:16.078 [2024-07-16 00:11:50.508129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.078 [2024-07-16 00:11:50.512163] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x109c520, cid 3, qid 0 00:28:16.078 [2024-07-16 00:11:50.512300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.078 [2024-07-16 00:11:50.512314] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.078 [2024-07-16 00:11:50.512326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.078 [2024-07-16 00:11:50.512335] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x109c520) on tqpair=0x1043030 00:28:16.078 
[2024-07-16 00:11:50.512351] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:28:16.078 00:28:16.078 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:16.078 [2024-07-16 00:11:50.546336] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:16.078 [2024-07-16 00:11:50.546380] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335495 ] 00:28:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.341 [2024-07-16 00:11:50.588955] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:16.341 [2024-07-16 00:11:50.589015] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:16.341 [2024-07-16 00:11:50.589026] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:16.341 [2024-07-16 00:11:50.589049] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:16.341 [2024-07-16 00:11:50.589066] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:16.341 [2024-07-16 00:11:50.589291] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:16.341 [2024-07-16 00:11:50.589350] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1825030 0 00:28:16.341 [2024-07-16 00:11:50.603164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:16.341 [2024-07-16 
00:11:50.603187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:16.341 [2024-07-16 00:11:50.603197] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:16.341 [2024-07-16 00:11:50.603205] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:16.341 [2024-07-16 00:11:50.603253] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.603266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.603274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.341 [2024-07-16 00:11:50.603292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:16.341 [2024-07-16 00:11:50.603321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.341 [2024-07-16 00:11:50.610154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.341 [2024-07-16 00:11:50.610174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.341 [2024-07-16 00:11:50.610182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.341 [2024-07-16 00:11:50.610209] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:16.341 [2024-07-16 00:11:50.610221] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:16.341 [2024-07-16 00:11:50.610232] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:16.341 [2024-07-16 00:11:50.610258] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.341 [2024-07-16 
00:11:50.610272] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.341 [2024-07-16 00:11:50.610293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.341 [2024-07-16 00:11:50.610320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.341 [2024-07-16 00:11:50.610426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.341 [2024-07-16 00:11:50.610443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.341 [2024-07-16 00:11:50.610451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.341 [2024-07-16 00:11:50.610475] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:16.341 [2024-07-16 00:11:50.610491] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:16.341 [2024-07-16 00:11:50.610505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610514] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610521] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.341 [2024-07-16 00:11:50.610534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.341 [2024-07-16 00:11:50.610557] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.341 
[2024-07-16 00:11:50.610656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.341 [2024-07-16 00:11:50.610672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.341 [2024-07-16 00:11:50.610680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610688] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.341 [2024-07-16 00:11:50.610702] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:16.341 [2024-07-16 00:11:50.610717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:16.341 [2024-07-16 00:11:50.610731] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.341 [2024-07-16 00:11:50.610759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.341 [2024-07-16 00:11:50.610782] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.341 [2024-07-16 00:11:50.610883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.341 [2024-07-16 00:11:50.610899] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.341 [2024-07-16 00:11:50.610906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.341 [2024-07-16 00:11:50.610925] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:16.341 [2024-07-16 00:11:50.610944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.341 [2024-07-16 00:11:50.610962] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.610974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.342 [2024-07-16 00:11:50.611003] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.611106] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.611121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.611129] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.611156] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:16.342 [2024-07-16 00:11:50.611166] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:16.342 [2024-07-16 00:11:50.611181] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:16.342 [2024-07-16 00:11:50.611293] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:16.342 [2024-07-16 00:11:50.611301] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:16.342 [2024-07-16 00:11:50.611317] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611325] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.611345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.342 [2024-07-16 00:11:50.611370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.611473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.611489] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.611497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611505] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.611516] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:16.342 [2024-07-16 00:11:50.611535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.611564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.342 [2024-07-16 00:11:50.611587] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.611682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.611698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.611706] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611713] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.611725] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:16.342 [2024-07-16 00:11:50.611735] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.611749] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:16.342 [2024-07-16 00:11:50.611769] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.611794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.611803] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.611816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.342 [2024-07-16 00:11:50.611839] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.611975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.342 [2024-07-16 00:11:50.611990] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.342 [2024-07-16 00:11:50.612006] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612019] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=4096, cccid=0 00:28:16.342 [2024-07-16 00:11:50.612030] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e100) on tqpair(0x1825030): expected_datao=0, payload_size=4096 00:28:16.342 [2024-07-16 00:11:50.612039] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612060] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612070] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612084] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.612095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.612103] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.612129] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:16.342 [2024-07-16 00:11:50.612149] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:16.342 [2024-07-16 00:11:50.612159] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:16.342 [2024-07-16 00:11:50.612167] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:16.342 [2024-07-16 00:11:50.612177] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:16.342 [2024-07-16 00:11:50.612186] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612220] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612235] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612251] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.342 [2024-07-16 00:11:50.612288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.612391] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.612407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.612414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612422] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e100) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.612438] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612452] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612460] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.342 [2024-07-16 00:11:50.612483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612491] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.342 [2024-07-16 00:11:50.612520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612528] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.342 [2024-07-16 00:11:50.612556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612564] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.342 [2024-07-16 00:11:50.612592] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612612] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.342 [2024-07-16 00:11:50.612645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.342 [2024-07-16 00:11:50.612672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e100, cid 0, qid 0 00:28:16.342 [2024-07-16 00:11:50.612690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e260, cid 1, qid 0 00:28:16.342 [2024-07-16 00:11:50.612703] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e3c0, cid 2, qid 0 00:28:16.342 [2024-07-16 00:11:50.612713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.342 [2024-07-16 00:11:50.612722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.342 [2024-07-16 00:11:50.612839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.342 [2024-07-16 00:11:50.612852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.342 [2024-07-16 00:11:50.612860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612867] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.342 [2024-07-16 00:11:50.612879] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:16.342 [2024-07-16 00:11:50.612890] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:16.342 [2024-07-16 
00:11:50.612905] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612917] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:16.342 [2024-07-16 00:11:50.612933] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612942] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.342 [2024-07-16 00:11:50.612950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.612962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:16.343 [2024-07-16 00:11:50.612985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.343 [2024-07-16 00:11:50.613085] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.613100] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.613108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.613116] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.613202] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.613226] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.613241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.613250] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.613262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.613286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.343 [2024-07-16 00:11:50.613400] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.343 [2024-07-16 00:11:50.613423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.343 [2024-07-16 00:11:50.613432] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.613440] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=4096, cccid=4 00:28:16.343 [2024-07-16 00:11:50.613449] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e680) on tqpair(0x1825030): expected_datao=0, payload_size=4096 00:28:16.343 [2024-07-16 00:11:50.613457] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.613477] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.613487] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.658174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.658182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.658210] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 
00:28:16.343 [2024-07-16 00:11:50.658235] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.658256] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.658270] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658279] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.658292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.658322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.343 [2024-07-16 00:11:50.658440] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.343 [2024-07-16 00:11:50.658463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.343 [2024-07-16 00:11:50.658472] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658479] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=4096, cccid=4 00:28:16.343 [2024-07-16 00:11:50.658488] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e680) on tqpair(0x1825030): expected_datao=0, payload_size=4096 00:28:16.343 [2024-07-16 00:11:50.658497] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658517] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.658527] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:16.343 [2024-07-16 00:11:50.699265] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.699274] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699282] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.699307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699329] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699344] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699353] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.699366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.699392] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.343 [2024-07-16 00:11:50.699494] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.343 [2024-07-16 00:11:50.699511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.343 [2024-07-16 00:11:50.699519] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699526] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=4096, cccid=4 00:28:16.343 [2024-07-16 00:11:50.699535] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e680) on tqpair(0x1825030): expected_datao=0, payload_size=4096 
00:28:16.343 [2024-07-16 00:11:50.699544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699577] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699589] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.699613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.699621] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.699645] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699662] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699681] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699697] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699708] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699718] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:16.343 [2024-07-16 00:11:50.699727] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport 
ready (timeout 30000 ms) 00:28:16.343 [2024-07-16 00:11:50.699737] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:16.343 [2024-07-16 00:11:50.699763] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699773] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.699786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.699799] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699808] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.699815] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.699826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.343 [2024-07-16 00:11:50.699853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.343 [2024-07-16 00:11:50.699872] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e7e0, cid 5, qid 0 00:28:16.343 [2024-07-16 00:11:50.699976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.699991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.699999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700007] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.700020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 
00:11:50.700031] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.700038] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700046] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e7e0) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.700065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700074] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.700086] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.700109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e7e0, cid 5, qid 0 00:28:16.343 [2024-07-16 00:11:50.700212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.700228] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.700236] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e7e0) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.700263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.700285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.700312] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e7e0, cid 5, qid 0 00:28:16.343 [2024-07-16 00:11:50.700415] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.700430] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.343 [2024-07-16 00:11:50.700438] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e7e0) on tqpair=0x1825030 00:28:16.343 [2024-07-16 00:11:50.700465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.343 [2024-07-16 00:11:50.700474] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1825030) 00:28:16.343 [2024-07-16 00:11:50.700487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.343 [2024-07-16 00:11:50.700509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e7e0, cid 5, qid 0 00:28:16.343 [2024-07-16 00:11:50.700599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.343 [2024-07-16 00:11:50.700615] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.344 [2024-07-16 00:11:50.700622] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.700630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e7e0) on tqpair=0x1825030 00:28:16.344 [2024-07-16 00:11:50.700653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.700664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1825030) 00:28:16.344 [2024-07-16 00:11:50.700676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.344 [2024-07-16 00:11:50.700689] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.700697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1825030) 00:28:16.344 [2024-07-16 00:11:50.700708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.344 [2024-07-16 00:11:50.700721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.700729] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1825030) 00:28:16.344 [2024-07-16 00:11:50.700740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.344 [2024-07-16 00:11:50.700754] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.700762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1825030) 00:28:16.344 [2024-07-16 00:11:50.700773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.344 [2024-07-16 00:11:50.700798] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e7e0, cid 5, qid 0 00:28:16.344 [2024-07-16 00:11:50.700816] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e680, cid 4, qid 0 00:28:16.344 [2024-07-16 00:11:50.700829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e940, cid 6, qid 0 00:28:16.344 [2024-07-16 00:11:50.700838] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187eaa0, cid 7, qid 0 00:28:16.344 [2024-07-16 00:11:50.701016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.344 [2024-07-16 
00:11:50.701043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.344 [2024-07-16 00:11:50.701059] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701067] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=8192, cccid=5 00:28:16.344 [2024-07-16 00:11:50.701076] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e7e0) on tqpair(0x1825030): expected_datao=0, payload_size=8192 00:28:16.344 [2024-07-16 00:11:50.701090] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701116] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701127] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701144] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.344 [2024-07-16 00:11:50.701157] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.344 [2024-07-16 00:11:50.701164] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701172] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=512, cccid=4 00:28:16.344 [2024-07-16 00:11:50.701180] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e680) on tqpair(0x1825030): expected_datao=0, payload_size=512 00:28:16.344 [2024-07-16 00:11:50.701189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701200] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701208] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701217] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.344 [2024-07-16 00:11:50.701227] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =7 00:28:16.344 [2024-07-16 00:11:50.701235] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701242] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=512, cccid=6 00:28:16.344 [2024-07-16 00:11:50.701251] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187e940) on tqpair(0x1825030): expected_datao=0, payload_size=512 00:28:16.344 [2024-07-16 00:11:50.701259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701270] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701278] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701287] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:16.344 [2024-07-16 00:11:50.701297] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:16.344 [2024-07-16 00:11:50.701305] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701312] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1825030): datao=0, datal=4096, cccid=7 00:28:16.344 [2024-07-16 00:11:50.701321] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x187eaa0) on tqpair(0x1825030): expected_datao=0, payload_size=4096 00:28:16.344 [2024-07-16 00:11:50.701329] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701340] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.701348] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.745164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.344 [2024-07-16 00:11:50.745187] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.344 [2024-07-16 00:11:50.745200] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.745208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e7e0) on tqpair=0x1825030 00:28:16.344 [2024-07-16 00:11:50.745233] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.344 [2024-07-16 00:11:50.745245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.344 [2024-07-16 00:11:50.745253] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.745261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e680) on tqpair=0x1825030 00:28:16.344 [2024-07-16 00:11:50.745278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.344 [2024-07-16 00:11:50.745290] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.344 [2024-07-16 00:11:50.745297] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.745308] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e940) on tqpair=0x1825030 00:28:16.344 [2024-07-16 00:11:50.745327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.344 [2024-07-16 00:11:50.745338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.344 [2024-07-16 00:11:50.745346] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.344 [2024-07-16 00:11:50.745353] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187eaa0) on tqpair=0x1825030 00:28:16.344 ===================================================== 00:28:16.344 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:16.344 ===================================================== 00:28:16.344 Controller Capabilities/Features 00:28:16.344 ================================ 00:28:16.344 Vendor ID: 8086 00:28:16.344 Subsystem Vendor ID: 8086 00:28:16.344 
Serial Number: SPDK00000000000001 00:28:16.344 Model Number: SPDK bdev Controller 00:28:16.344 Firmware Version: 24.05.1 00:28:16.344 Recommended Arb Burst: 6 00:28:16.344 IEEE OUI Identifier: e4 d2 5c 00:28:16.344 Multi-path I/O 00:28:16.344 May have multiple subsystem ports: Yes 00:28:16.344 May have multiple controllers: Yes 00:28:16.344 Associated with SR-IOV VF: No 00:28:16.344 Max Data Transfer Size: 131072 00:28:16.344 Max Number of Namespaces: 32 00:28:16.344 Max Number of I/O Queues: 127 00:28:16.344 NVMe Specification Version (VS): 1.3 00:28:16.344 NVMe Specification Version (Identify): 1.3 00:28:16.344 Maximum Queue Entries: 128 00:28:16.344 Contiguous Queues Required: Yes 00:28:16.344 Arbitration Mechanisms Supported 00:28:16.344 Weighted Round Robin: Not Supported 00:28:16.344 Vendor Specific: Not Supported 00:28:16.344 Reset Timeout: 15000 ms 00:28:16.344 Doorbell Stride: 4 bytes 00:28:16.344 NVM Subsystem Reset: Not Supported 00:28:16.344 Command Sets Supported 00:28:16.344 NVM Command Set: Supported 00:28:16.344 Boot Partition: Not Supported 00:28:16.344 Memory Page Size Minimum: 4096 bytes 00:28:16.344 Memory Page Size Maximum: 4096 bytes 00:28:16.344 Persistent Memory Region: Not Supported 00:28:16.344 Optional Asynchronous Events Supported 00:28:16.344 Namespace Attribute Notices: Supported 00:28:16.344 Firmware Activation Notices: Not Supported 00:28:16.344 ANA Change Notices: Not Supported 00:28:16.344 PLE Aggregate Log Change Notices: Not Supported 00:28:16.344 LBA Status Info Alert Notices: Not Supported 00:28:16.344 EGE Aggregate Log Change Notices: Not Supported 00:28:16.344 Normal NVM Subsystem Shutdown event: Not Supported 00:28:16.344 Zone Descriptor Change Notices: Not Supported 00:28:16.344 Discovery Log Change Notices: Not Supported 00:28:16.344 Controller Attributes 00:28:16.344 128-bit Host Identifier: Supported 00:28:16.344 Non-Operational Permissive Mode: Not Supported 00:28:16.344 NVM Sets: Not Supported 00:28:16.344 Read 
Recovery Levels: Not Supported 00:28:16.344 Endurance Groups: Not Supported 00:28:16.344 Predictable Latency Mode: Not Supported 00:28:16.344 Traffic Based Keep ALive: Not Supported 00:28:16.344 Namespace Granularity: Not Supported 00:28:16.344 SQ Associations: Not Supported 00:28:16.344 UUID List: Not Supported 00:28:16.344 Multi-Domain Subsystem: Not Supported 00:28:16.344 Fixed Capacity Management: Not Supported 00:28:16.344 Variable Capacity Management: Not Supported 00:28:16.344 Delete Endurance Group: Not Supported 00:28:16.344 Delete NVM Set: Not Supported 00:28:16.344 Extended LBA Formats Supported: Not Supported 00:28:16.344 Flexible Data Placement Supported: Not Supported 00:28:16.344 00:28:16.344 Controller Memory Buffer Support 00:28:16.344 ================================ 00:28:16.344 Supported: No 00:28:16.344 00:28:16.344 Persistent Memory Region Support 00:28:16.344 ================================ 00:28:16.344 Supported: No 00:28:16.344 00:28:16.344 Admin Command Set Attributes 00:28:16.344 ============================ 00:28:16.344 Security Send/Receive: Not Supported 00:28:16.344 Format NVM: Not Supported 00:28:16.344 Firmware Activate/Download: Not Supported 00:28:16.344 Namespace Management: Not Supported 00:28:16.344 Device Self-Test: Not Supported 00:28:16.344 Directives: Not Supported 00:28:16.344 NVMe-MI: Not Supported 00:28:16.344 Virtualization Management: Not Supported 00:28:16.344 Doorbell Buffer Config: Not Supported 00:28:16.344 Get LBA Status Capability: Not Supported 00:28:16.344 Command & Feature Lockdown Capability: Not Supported 00:28:16.344 Abort Command Limit: 4 00:28:16.345 Async Event Request Limit: 4 00:28:16.345 Number of Firmware Slots: N/A 00:28:16.345 Firmware Slot 1 Read-Only: N/A 00:28:16.345 Firmware Activation Without Reset: N/A 00:28:16.345 Multiple Update Detection Support: N/A 00:28:16.345 Firmware Update Granularity: No Information Provided 00:28:16.345 Per-Namespace SMART Log: No 00:28:16.345 Asymmetric Namespace 
Access Log Page: Not Supported 00:28:16.345 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:16.345 Command Effects Log Page: Supported 00:28:16.345 Get Log Page Extended Data: Supported 00:28:16.345 Telemetry Log Pages: Not Supported 00:28:16.345 Persistent Event Log Pages: Not Supported 00:28:16.345 Supported Log Pages Log Page: May Support 00:28:16.345 Commands Supported & Effects Log Page: Not Supported 00:28:16.345 Feature Identifiers & Effects Log Page:May Support 00:28:16.345 NVMe-MI Commands & Effects Log Page: May Support 00:28:16.345 Data Area 4 for Telemetry Log: Not Supported 00:28:16.345 Error Log Page Entries Supported: 128 00:28:16.345 Keep Alive: Supported 00:28:16.345 Keep Alive Granularity: 10000 ms 00:28:16.345 00:28:16.345 NVM Command Set Attributes 00:28:16.345 ========================== 00:28:16.345 Submission Queue Entry Size 00:28:16.345 Max: 64 00:28:16.345 Min: 64 00:28:16.345 Completion Queue Entry Size 00:28:16.345 Max: 16 00:28:16.345 Min: 16 00:28:16.345 Number of Namespaces: 32 00:28:16.345 Compare Command: Supported 00:28:16.345 Write Uncorrectable Command: Not Supported 00:28:16.345 Dataset Management Command: Supported 00:28:16.345 Write Zeroes Command: Supported 00:28:16.345 Set Features Save Field: Not Supported 00:28:16.345 Reservations: Supported 00:28:16.345 Timestamp: Not Supported 00:28:16.345 Copy: Supported 00:28:16.345 Volatile Write Cache: Present 00:28:16.345 Atomic Write Unit (Normal): 1 00:28:16.345 Atomic Write Unit (PFail): 1 00:28:16.345 Atomic Compare & Write Unit: 1 00:28:16.345 Fused Compare & Write: Supported 00:28:16.345 Scatter-Gather List 00:28:16.345 SGL Command Set: Supported 00:28:16.345 SGL Keyed: Supported 00:28:16.345 SGL Bit Bucket Descriptor: Not Supported 00:28:16.345 SGL Metadata Pointer: Not Supported 00:28:16.345 Oversized SGL: Not Supported 00:28:16.345 SGL Metadata Address: Not Supported 00:28:16.345 SGL Offset: Supported 00:28:16.345 Transport SGL Data Block: Not Supported 00:28:16.345 Replay 
Protected Memory Block: Not Supported 00:28:16.345 00:28:16.345 Firmware Slot Information 00:28:16.345 ========================= 00:28:16.345 Active slot: 1 00:28:16.345 Slot 1 Firmware Revision: 24.05.1 00:28:16.345 00:28:16.345 00:28:16.345 Commands Supported and Effects 00:28:16.345 ============================== 00:28:16.345 Admin Commands 00:28:16.345 -------------- 00:28:16.345 Get Log Page (02h): Supported 00:28:16.345 Identify (06h): Supported 00:28:16.345 Abort (08h): Supported 00:28:16.345 Set Features (09h): Supported 00:28:16.345 Get Features (0Ah): Supported 00:28:16.345 Asynchronous Event Request (0Ch): Supported 00:28:16.345 Keep Alive (18h): Supported 00:28:16.345 I/O Commands 00:28:16.345 ------------ 00:28:16.345 Flush (00h): Supported LBA-Change 00:28:16.345 Write (01h): Supported LBA-Change 00:28:16.345 Read (02h): Supported 00:28:16.345 Compare (05h): Supported 00:28:16.345 Write Zeroes (08h): Supported LBA-Change 00:28:16.345 Dataset Management (09h): Supported LBA-Change 00:28:16.345 Copy (19h): Supported LBA-Change 00:28:16.345 Unknown (79h): Supported LBA-Change 00:28:16.345 Unknown (7Ah): Supported 00:28:16.345 00:28:16.345 Error Log 00:28:16.345 ========= 00:28:16.345 00:28:16.345 Arbitration 00:28:16.345 =========== 00:28:16.345 Arbitration Burst: 1 00:28:16.345 00:28:16.345 Power Management 00:28:16.345 ================ 00:28:16.345 Number of Power States: 1 00:28:16.345 Current Power State: Power State #0 00:28:16.345 Power State #0: 00:28:16.345 Max Power: 0.00 W 00:28:16.345 Non-Operational State: Operational 00:28:16.345 Entry Latency: Not Reported 00:28:16.345 Exit Latency: Not Reported 00:28:16.345 Relative Read Throughput: 0 00:28:16.345 Relative Read Latency: 0 00:28:16.345 Relative Write Throughput: 0 00:28:16.345 Relative Write Latency: 0 00:28:16.345 Idle Power: Not Reported 00:28:16.345 Active Power: Not Reported 00:28:16.345 Non-Operational Permissive Mode: Not Supported 00:28:16.345 00:28:16.345 Health Information 
00:28:16.345 ================== 00:28:16.345 Critical Warnings: 00:28:16.345 Available Spare Space: OK 00:28:16.345 Temperature: OK 00:28:16.345 Device Reliability: OK 00:28:16.345 Read Only: No 00:28:16.345 Volatile Memory Backup: OK 00:28:16.345 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:16.345 Temperature Threshold: [2024-07-16 00:11:50.745500] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.345 [2024-07-16 00:11:50.745513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1825030) 00:28:16.345 [2024-07-16 00:11:50.745526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.345 [2024-07-16 00:11:50.745551] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187eaa0, cid 7, qid 0 00:28:16.345 [2024-07-16 00:11:50.745638] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.345 [2024-07-16 00:11:50.745653] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.345 [2024-07-16 00:11:50.745661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.345 [2024-07-16 00:11:50.745669] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187eaa0) on tqpair=0x1825030 00:28:16.345 [2024-07-16 00:11:50.745713] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:16.345 [2024-07-16 00:11:50.745737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.345 [2024-07-16 00:11:50.745750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.345 [2024-07-16 00:11:50.745761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:16.345 [2024-07-16 00:11:50.745773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.345 [2024-07-16 00:11:50.745787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.345 [2024-07-16 00:11:50.745796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.345 [2024-07-16 00:11:50.745804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.345 [2024-07-16 00:11:50.745816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.345 [2024-07-16 00:11:50.745841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.346 [2024-07-16 00:11:50.745936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.346 [2024-07-16 00:11:50.745952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.346 [2024-07-16 00:11:50.745959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.745967] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.346 [2024-07-16 00:11:50.745983] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.745992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.346 [2024-07-16 00:11:50.746012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.346 [2024-07-16 00:11:50.746040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.346 [2024-07-16 
00:11:50.746158] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.346 [2024-07-16 00:11:50.746175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.346 [2024-07-16 00:11:50.746183] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746194] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.346 [2024-07-16 00:11:50.746206] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:16.346 [2024-07-16 00:11:50.746216] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:16.346 [2024-07-16 00:11:50.746234] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746252] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.346 [2024-07-16 00:11:50.746264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.346 [2024-07-16 00:11:50.746287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.346 [2024-07-16 00:11:50.746383] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.346 [2024-07-16 00:11:50.746398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.346 [2024-07-16 00:11:50.746406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.346 [2024-07-16 00:11:50.746434] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.346 [2024-07-16 
00:11:50.746444] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.346 [2024-07-16 00:11:50.746452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.346 [2024-07-16 00:11:50.746464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.346 [2024-07-16 00:11:50.746486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.746577] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.746592] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.746600] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.746627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.746657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.746679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.746771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.746787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.746795] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 
00:11:50.746803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.746823] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.746853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.746875] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.746963] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.746983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.746991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.746999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747019] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747029] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747037] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.747049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.747071] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.747165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:16.347 [2024-07-16 00:11:50.747181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.747189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.747247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.747270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.747361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.747376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.747383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747391] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747421] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.747441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:16.347 [2024-07-16 00:11:50.747463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.747550] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.747565] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.747573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747600] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747610] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.747630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.747652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.747736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.747751] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.747764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747792] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747810] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.747821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.747843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.747943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.747957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.747965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.747973] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.747993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748003] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.748022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.748045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.748136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.748159] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.748167] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748175] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 
[2024-07-16 00:11:50.748195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748213] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.748226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.748248] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.748341] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.748356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.748364] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.748392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748402] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.748422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.748444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.748535] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.748550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 
[2024-07-16 00:11:50.748558] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748570] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.748591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748601] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748609] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.748621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.748643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.748733] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.748749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.748756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.748784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748802] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.347 [2024-07-16 00:11:50.748814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.347 [2024-07-16 00:11:50.748836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, 
cid 3, qid 0 00:28:16.347 [2024-07-16 00:11:50.748924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.347 [2024-07-16 00:11:50.748939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.347 [2024-07-16 00:11:50.748947] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748955] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.347 [2024-07-16 00:11:50.748974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.347 [2024-07-16 00:11:50.748984] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.348 [2024-07-16 00:11:50.748992] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.348 [2024-07-16 00:11:50.749004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.348 [2024-07-16 00:11:50.749026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.348 [2024-07-16 00:11:50.749120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.348 [2024-07-16 00:11:50.749136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.348 [2024-07-16 00:11:50.753164] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.348 [2024-07-16 00:11:50.753173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.348 [2024-07-16 00:11:50.753195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:16.348 [2024-07-16 00:11:50.753206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:16.348 [2024-07-16 00:11:50.753213] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1825030) 00:28:16.348 [2024-07-16 00:11:50.753226] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.348 [2024-07-16 00:11:50.753249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x187e520, cid 3, qid 0 00:28:16.348 [2024-07-16 00:11:50.753348] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:16.348 [2024-07-16 00:11:50.753363] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:16.348 [2024-07-16 00:11:50.753371] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:16.348 [2024-07-16 00:11:50.753378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x187e520) on tqpair=0x1825030 00:28:16.348 [2024-07-16 00:11:50.753399] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:28:16.348 0 Kelvin (-273 Celsius) 00:28:16.348 Available Spare: 0% 00:28:16.348 Available Spare Threshold: 0% 00:28:16.348 Life Percentage Used: 0% 00:28:16.348 Data Units Read: 0 00:28:16.348 Data Units Written: 0 00:28:16.348 Host Read Commands: 0 00:28:16.348 Host Write Commands: 0 00:28:16.348 Controller Busy Time: 0 minutes 00:28:16.348 Power Cycles: 0 00:28:16.348 Power On Hours: 0 hours 00:28:16.348 Unsafe Shutdowns: 0 00:28:16.348 Unrecoverable Media Errors: 0 00:28:16.348 Lifetime Error Log Entries: 0 00:28:16.348 Warning Temperature Time: 0 minutes 00:28:16.348 Critical Temperature Time: 0 minutes 00:28:16.348 00:28:16.348 Number of Queues 00:28:16.348 ================ 00:28:16.348 Number of I/O Submission Queues: 127 00:28:16.348 Number of I/O Completion Queues: 127 00:28:16.348 00:28:16.348 Active Namespaces 00:28:16.348 ================= 00:28:16.348 Namespace ID:1 00:28:16.348 Error Recovery Timeout: Unlimited 00:28:16.348 Command Set Identifier: NVM (00h) 00:28:16.348 Deallocate: Supported 00:28:16.348 Deallocated/Unwritten Error: Not Supported 00:28:16.348 Deallocated 
Read Value: Unknown 00:28:16.348 Deallocate in Write Zeroes: Not Supported 00:28:16.348 Deallocated Guard Field: 0xFFFF 00:28:16.348 Flush: Supported 00:28:16.348 Reservation: Supported 00:28:16.348 Namespace Sharing Capabilities: Multiple Controllers 00:28:16.348 Size (in LBAs): 131072 (0GiB) 00:28:16.348 Capacity (in LBAs): 131072 (0GiB) 00:28:16.348 Utilization (in LBAs): 131072 (0GiB) 00:28:16.348 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:16.348 EUI64: ABCDEF0123456789 00:28:16.348 UUID: bdbab89b-8c5a-4563-9727-b71949111798 00:28:16.348 Thin Provisioning: Not Supported 00:28:16.348 Per-NS Atomic Units: Yes 00:28:16.348 Atomic Boundary Size (Normal): 0 00:28:16.348 Atomic Boundary Size (PFail): 0 00:28:16.348 Atomic Boundary Offset: 0 00:28:16.348 Maximum Single Source Range Length: 65535 00:28:16.348 Maximum Copy Length: 65535 00:28:16.348 Maximum Source Range Count: 1 00:28:16.348 NGUID/EUI64 Never Reused: No 00:28:16.348 Namespace Write Protected: No 00:28:16.348 Number of LBA Formats: 1 00:28:16.348 Current LBA Format: LBA Format #00 00:28:16.348 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:16.348 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:16.348 00:11:50 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:16.348 rmmod nvme_tcp 00:28:16.348 rmmod nvme_fabrics 00:28:16.348 rmmod nvme_keyring 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1335382 ']' 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1335382 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1335382 ']' 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1335382 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:16.348 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1335382 00:28:16.607 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:16.607 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:16.607 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1335382' 00:28:16.607 killing process with pid 1335382 00:28:16.607 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1335382 00:28:16.607 00:11:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1335382 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:16.607 00:11:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.145 00:11:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.145 00:28:19.145 real 0m5.048s 00:28:19.145 user 0m4.372s 00:28:19.145 sys 0m1.616s 00:28:19.145 00:11:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:19.145 00:11:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:19.145 ************************************ 00:28:19.145 END TEST nvmf_identify 00:28:19.145 ************************************ 00:28:19.145 00:11:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:19.145 00:11:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.145 00:11:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.145 00:11:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.145 ************************************ 00:28:19.145 START TEST nvmf_perf 00:28:19.145 ************************************ 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:19.145 * 
Looking for test storage... 00:28:19.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.145 00:11:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.522 00:11:54 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.522 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:20.523 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:20.523 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:20.523 Found net devices under 0000:08:00.0: cvl_0_0 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: 
cvl_0_1' 00:28:20.523 Found net devices under 0000:08:00.1: cvl_0_1 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.523 
00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:28:20.523 00:28:20.523 --- 10.0.0.2 ping statistics --- 00:28:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.523 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
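The nvmf_tcp_init sequence above moves one physical port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target side and leaves its peer (cvl_0_1) in the root namespace as the initiator, then verifies reachability with the two pings. A dry-run sketch of that wiring is below; it echoes the commands instead of executing them so it can run without root, and the function name is illustrative (not part of SPDK's scripts). Drop the leading `echo`s to apply it for real:

```shell
# Dry-run sketch of the namespace wiring nvmf_tcp_init performs in the log
# above: the target interface is moved into a namespace, both sides get an
# address on 10.0.0.0/24, and the links are brought up. setup_nvmf_netns is
# a hypothetical helper, not an SPDK function.
setup_nvmf_netns() {
  local tgt_if=$1 init_if=$2 ns=$3
  echo ip netns add "$ns"
  echo ip link set "$tgt_if" netns "$ns"
  echo ip addr add 10.0.0.1/24 dev "$init_if"
  echo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  echo ip link set "$init_if" up
  echo ip netns exec "$ns" ip link set "$tgt_if" up
  echo ip netns exec "$ns" ip link set lo up
}

setup_nvmf_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Isolating the target in its own namespace is what lets a single host exercise a real TCP path between initiator and target over the two physical ports.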
00:28:20.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:20.523 00:28:20.523 --- 10.0.0.1 ping statistics --- 00:28:20.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.523 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1336965 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1336965 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1336965 ']' 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:20.523 00:11:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:20.523 [2024-07-16 00:11:55.031914] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:20.523 [2024-07-16 00:11:55.032004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.782 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.783 [2024-07-16 00:11:55.096228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.783 [2024-07-16 00:11:55.183781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.783 [2024-07-16 00:11:55.183839] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.783 [2024-07-16 00:11:55.183855] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.783 [2024-07-16 00:11:55.183868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.783 [2024-07-16 00:11:55.183887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:20.783 [2024-07-16 00:11:55.183975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.783 [2024-07-16 00:11:55.184027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.783 [2024-07-16 00:11:55.184073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.783 [2024-07-16 00:11:55.184076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:21.069 00:11:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:24.352 00:11:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:24.352 00:11:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:24.352 00:11:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:28:24.352 00:11:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:24.609 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:24.609 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:84:00.0 ']' 00:28:24.609 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:24.609 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:24.609 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:24.867 [2024-07-16 00:11:59.218345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.867 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:25.125 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.125 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.382 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:25.382 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:25.640 00:11:59 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.898 [2024-07-16 00:12:00.249975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.898 00:12:00 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:26.155 00:12:00 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:28:26.156 00:12:00 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 
00:28:26.156 00:12:00 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:26.156 00:12:00 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:28:27.526 Initializing NVMe Controllers 00:28:27.526 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:28:27.526 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:28:27.526 Initialization complete. Launching workers. 00:28:27.526 ======================================================== 00:28:27.526 Latency(us) 00:28:27.526 Device Information : IOPS MiB/s Average min max 00:28:27.526 PCIE (0000:84:00.0) NSID 1 from core 0: 65626.95 256.36 486.73 56.31 7317.17 00:28:27.526 ======================================================== 00:28:27.526 Total : 65626.95 256.36 486.73 56.31 7317.17 00:28:27.526 00:28:27.526 00:12:01 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.526 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.899 Initializing NVMe Controllers 00:28:28.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:28.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:28.899 Initialization complete. Launching workers. 
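The MiB/s column in the latency table above is derived from the IOPS column and the 4096-byte IO size used by this spdk_nvme_perf invocation (`-o 4096`). A quick sanity check of the local-NVMe row, reproducing the conversion with the table's own numbers:

```shell
# Throughput check for the table above: MiB/s = IOPS * io_size_bytes / 2^20.
# 65626.95 IOPS at 4096-byte IOs should reproduce the reported 256.36 MiB/s.
awk 'BEGIN { printf "%.2f\n", 65626.95 * 4096 / 1048576 }'
# prints 256.36
```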
00:28:28.899 ======================================================== 00:28:28.899 Latency(us) 00:28:28.899 Device Information : IOPS MiB/s Average min max 00:28:28.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.00 0.24 16364.66 163.54 45063.08 00:28:28.899 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 43.00 0.17 23731.67 7884.11 47964.64 00:28:28.899 ======================================================== 00:28:28.899 Total : 105.00 0.41 19381.63 163.54 47964.64 00:28:28.899 00:28:28.899 00:12:03 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.899 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.829 Initializing NVMe Controllers 00:28:29.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:29.829 Initialization complete. Launching workers. 
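The "Total" average in the table above is an IOPS-weighted mean of the two namespace rows, not a simple average of the two latencies. Recomputing it from the per-namespace rows:

```shell
# Weighted-average check for the Total row above:
# total_avg = sum(iops_i * avg_i) / sum(iops_i), using the two NSID rows.
awk 'BEGIN { printf "%.2f\n", (62 * 16364.66 + 43 * 23731.67) / (62 + 43) }'
# prints 19381.63
```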
00:28:29.829 ======================================================== 00:28:29.829 Latency(us) 00:28:29.829 Device Information : IOPS MiB/s Average min max 00:28:29.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7823.00 30.56 4093.00 748.58 7937.51 00:28:29.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3827.00 14.95 8429.31 6618.40 16026.76 00:28:29.829 ======================================================== 00:28:29.829 Total : 11650.00 45.51 5517.47 748.58 16026.76 00:28:29.829 00:28:30.086 00:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:30.086 00:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:30.086 00:12:04 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.606 Initializing NVMe Controllers 00:28:32.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.606 Controller IO queue size 128, less than required. 00:28:32.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.606 Controller IO queue size 128, less than required. 00:28:32.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:32.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:32.606 Initialization complete. Launching workers. 
00:28:32.606 ======================================================== 00:28:32.606 Latency(us) 00:28:32.606 Device Information : IOPS MiB/s Average min max 00:28:32.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1612.98 403.25 80029.56 49885.01 119376.24 00:28:32.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.49 143.12 231190.28 86653.33 338589.13 00:28:32.606 ======================================================== 00:28:32.606 Total : 2185.48 546.37 119626.68 49885.01 338589.13 00:28:32.606 00:28:32.606 00:12:06 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:32.606 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.606 No valid NVMe controllers or AIO or URING devices found 00:28:32.606 Initializing NVMe Controllers 00:28:32.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.606 Controller IO queue size 128, less than required. 00:28:32.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.606 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:32.606 Controller IO queue size 128, less than required. 00:28:32.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:32.606 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:32.606 WARNING: Some requested NVMe devices were skipped 00:28:32.606 00:12:07 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:32.863 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.388 Initializing NVMe Controllers 00:28:35.388 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.388 Controller IO queue size 128, less than required. 00:28:35.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.388 Controller IO queue size 128, less than required. 00:28:35.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:35.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.388 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:35.388 Initialization complete. Launching workers. 
00:28:35.388 00:28:35.388 ==================== 00:28:35.388 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:35.388 TCP transport: 00:28:35.388 polls: 11235 00:28:35.388 idle_polls: 7662 00:28:35.388 sock_completions: 3573 00:28:35.388 nvme_completions: 5677 00:28:35.388 submitted_requests: 8446 00:28:35.388 queued_requests: 1 00:28:35.388 00:28:35.388 ==================== 00:28:35.388 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:35.388 TCP transport: 00:28:35.388 polls: 11203 00:28:35.388 idle_polls: 7705 00:28:35.388 sock_completions: 3498 00:28:35.388 nvme_completions: 5693 00:28:35.388 submitted_requests: 8530 00:28:35.388 queued_requests: 1 00:28:35.388 ======================================================== 00:28:35.388 Latency(us) 00:28:35.388 Device Information : IOPS MiB/s Average min max 00:28:35.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1418.18 354.54 91216.91 52828.27 156148.96 00:28:35.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1422.17 355.54 91327.28 48886.39 145266.31 00:28:35.388 ======================================================== 00:28:35.388 Total : 2840.35 710.09 91272.17 48886.39 156148.96 00:28:35.388 00:28:35.388 00:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:35.388 00:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.388 00:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:35.388 00:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:84:00.0 ']' 00:28:35.388 00:12:09 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=27bf6f05-7776-401b-a430-eb5ec5eebae7 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 27bf6f05-7776-401b-a430-eb5ec5eebae7 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=27bf6f05-7776-401b-a430-eb5ec5eebae7 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:38.666 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:38.667 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:38.927 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:38.927 { 00:28:38.927 "uuid": "27bf6f05-7776-401b-a430-eb5ec5eebae7", 00:28:38.927 "name": "lvs_0", 00:28:38.927 "base_bdev": "Nvme0n1", 00:28:38.927 "total_data_clusters": 238234, 00:28:38.927 "free_clusters": 238234, 00:28:38.927 "block_size": 512, 00:28:38.927 "cluster_size": 4194304 00:28:38.927 } 00:28:38.927 ]' 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="27bf6f05-7776-401b-a430-eb5ec5eebae7") .free_clusters' 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="27bf6f05-7776-401b-a430-eb5ec5eebae7") .cluster_size' 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:38.928 952936 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:28:38.928 00:12:13 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27bf6f05-7776-401b-a430-eb5ec5eebae7 lbd_0 20480 00:28:39.860 00:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=5ae399c6-0d75-4361-8305-a3fd96fe5ac9 00:28:39.860 00:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 5ae399c6-0d75-4361-8305-a3fd96fe5ac9 lvs_n_0 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=cf1e4621-fd84-4ac7-a70a-c75603d2878a 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb cf1e4621-fd84-4ac7-a70a-c75603d2878a 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=cf1e4621-fd84-4ac7-a70a-c75603d2878a 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:40.479 00:12:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:40.736 { 00:28:40.736 "uuid": "27bf6f05-7776-401b-a430-eb5ec5eebae7", 00:28:40.736 "name": "lvs_0", 00:28:40.736 "base_bdev": "Nvme0n1", 00:28:40.736 "total_data_clusters": 238234, 00:28:40.736 "free_clusters": 233114, 00:28:40.736 "block_size": 512, 00:28:40.736 "cluster_size": 4194304 00:28:40.736 }, 00:28:40.736 { 00:28:40.736 "uuid": "cf1e4621-fd84-4ac7-a70a-c75603d2878a", 00:28:40.736 "name": "lvs_n_0", 00:28:40.736 "base_bdev": "5ae399c6-0d75-4361-8305-a3fd96fe5ac9", 00:28:40.736 "total_data_clusters": 5114, 00:28:40.736 "free_clusters": 
5114, 00:28:40.736 "block_size": 512, 00:28:40.736 "cluster_size": 4194304 00:28:40.736 } 00:28:40.736 ]' 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="cf1e4621-fd84-4ac7-a70a-c75603d2878a") .free_clusters' 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="cf1e4621-fd84-4ac7-a70a-c75603d2878a") .cluster_size' 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:40.736 20456 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:40.736 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf1e4621-fd84-4ac7-a70a-c75603d2878a lbd_nest_0 20456 00:28:40.994 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2a998a7e-b461-48b6-a98b-3b616536132d 00:28:40.994 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.252 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:41.252 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2a998a7e-b461-48b6-a98b-3b616536132d 00:28:41.510 00:12:15 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.768 00:12:16 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:41.768 00:12:16 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:41.768 00:12:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:41.768 00:12:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:41.768 00:12:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.768 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.959 Initializing NVMe Controllers 00:28:53.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.959 Initialization complete. Launching workers. 00:28:53.959 ======================================================== 00:28:53.959 Latency(us) 00:28:53.959 Device Information : IOPS MiB/s Average min max 00:28:53.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 51.19 0.02 19602.17 194.91 45838.67 00:28:53.959 ======================================================== 00:28:53.959 Total : 51.19 0.02 19602.17 194.91 45838.67 00:28:53.959 00:28:53.959 00:12:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:53.959 00:12:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.959 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.920 Initializing NVMe Controllers 00:29:03.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:03.920 Initialization complete. 
Launching workers. 00:29:03.920 ======================================================== 00:29:03.920 Latency(us) 00:29:03.920 Device Information : IOPS MiB/s Average min max 00:29:03.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.56 9.82 12737.98 6984.13 47892.50 00:29:03.920 ======================================================== 00:29:03.920 Total : 78.56 9.82 12737.98 6984.13 47892.50 00:29:03.920 00:29:03.920 00:12:36 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:03.920 00:12:36 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:03.920 00:12:36 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.920 Initializing NVMe Controllers 00:29:13.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.920 Initialization complete. Launching workers. 
00:29:13.920 ======================================================== 00:29:13.920 Latency(us) 00:29:13.920 Device Information : IOPS MiB/s Average min max 00:29:13.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6640.13 3.24 4826.06 330.48 43089.17 00:29:13.920 ======================================================== 00:29:13.920 Total : 6640.13 3.24 4826.06 330.48 43089.17 00:29:13.920 00:29:13.920 00:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:13.920 00:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:13.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.888 Initializing NVMe Controllers 00:29:23.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:23.888 Initialization complete. Launching workers. 
00:29:23.888 ======================================================== 00:29:23.888 Latency(us) 00:29:23.888 Device Information : IOPS MiB/s Average min max 00:29:23.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3851.99 481.50 8308.53 729.00 17592.51 00:29:23.888 ======================================================== 00:29:23.888 Total : 3851.99 481.50 8308.53 729.00 17592.51 00:29:23.888 00:29:23.888 00:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:23.888 00:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:23.888 00:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:23.888 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.883 Initializing NVMe Controllers 00:29:33.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:33.883 Controller IO queue size 128, less than required. 00:29:33.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:33.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:33.883 Initialization complete. Launching workers. 
00:29:33.883 ======================================================== 00:29:33.883 Latency(us) 00:29:33.883 Device Information : IOPS MiB/s Average min max 00:29:33.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10201.17 4.98 12555.06 1815.32 23475.27 00:29:33.883 ======================================================== 00:29:33.883 Total : 10201.17 4.98 12555.06 1815.32 23475.27 00:29:33.883 00:29:33.883 00:13:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:33.883 00:13:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:33.883 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.073 Initializing NVMe Controllers 00:29:46.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.073 Controller IO queue size 128, less than required. 00:29:46.073 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.073 Initialization complete. Launching workers. 
00:29:46.073 ======================================================== 00:29:46.073 Latency(us) 00:29:46.073 Device Information : IOPS MiB/s Average min max 00:29:46.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1164.50 145.56 110368.86 15483.65 223500.22 00:29:46.073 ======================================================== 00:29:46.073 Total : 1164.50 145.56 110368.86 15483.65 223500.22 00:29:46.073 00:29:46.073 00:13:18 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.073 00:13:18 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a998a7e-b461-48b6-a98b-3b616536132d 00:29:46.073 00:13:19 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:46.073 00:13:19 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ae399c6-0d75-4361-8305-a3fd96fe5ac9 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.073 rmmod 
nvme_tcp 00:29:46.073 rmmod nvme_fabrics 00:29:46.073 rmmod nvme_keyring 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1336965 ']' 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1336965 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1336965 ']' 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1336965 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1336965 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1336965' 00:29:46.073 killing process with pid 1336965 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1336965 00:29:46.073 00:13:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1336965 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.981 00:13:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.888 00:13:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.888 00:29:49.888 real 1m30.954s 00:29:49.888 user 5m36.580s 00:29:49.888 sys 0m15.032s 00:29:49.888 00:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:49.888 00:13:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:49.888 ************************************ 00:29:49.888 END TEST nvmf_perf 00:29:49.888 ************************************ 00:29:49.888 00:13:24 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:49.888 00:13:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:49.888 00:13:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:49.888 00:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.888 ************************************ 00:29:49.888 START TEST nvmf_fio_host 00:29:49.888 ************************************ 00:29:49.888 00:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:49.888 * Looking for test storage... 
00:29:49.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:49.889 
00:13:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.889 00:13:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.265 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:51.266 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:51.266 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:51.266 Found net devices under 0000:08:00.0: cvl_0_0 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:51.266 Found net devices under 0000:08:00.1: cvl_0_1 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:51.266 
00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:29:51.266 00:29:51.266 --- 10.0.0.2 ping statistics --- 00:29:51.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.266 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:29:51.266 00:29:51.266 --- 10.0.0.1 ping statistics --- 00:29:51.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.266 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.266 00:13:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.266 00:13:25 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1346577 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1346577 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1346577 ']' 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:51.525 00:13:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.525 [2024-07-16 00:13:25.846097] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:51.525 [2024-07-16 00:13:25.846210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.525 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.525 [2024-07-16 00:13:25.917329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.525 [2024-07-16 00:13:26.008726] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.525 [2024-07-16 00:13:26.008787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.525 [2024-07-16 00:13:26.008803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.525 [2024-07-16 00:13:26.008817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.525 [2024-07-16 00:13:26.008829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:51.525 [2024-07-16 00:13:26.008887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.525 [2024-07-16 00:13:26.008937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.525 [2024-07-16 00:13:26.008961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.525 [2024-07-16 00:13:26.008965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.782 00:13:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:51.782 00:13:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:51.782 00:13:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.039 [2024-07-16 00:13:26.397475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.039 00:13:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:52.039 00:13:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.039 00:13:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.039 00:13:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:52.297 Malloc1 00:29:52.297 00:13:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.554 00:13:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:52.812 00:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.069 
[2024-07-16 00:13:27.389526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.069 00:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:53.328 00:13:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:53.586 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:53.586 fio-3.35 00:29:53.586 Starting 1 thread 00:29:53.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.138 00:29:56.138 test: (groupid=0, jobs=1): err= 0: pid=1346848: Tue Jul 16 00:13:30 2024 00:29:56.138 read: IOPS=7227, BW=28.2MiB/s (29.6MB/s)(56.7MiB/2008msec) 00:29:56.138 slat (usec): 
min=2, max=103, avg= 2.52, stdev= 1.29 00:29:56.138 clat (usec): min=3127, max=15189, avg=9608.73, stdev=832.44 00:29:56.138 lat (usec): min=3149, max=15192, avg=9611.25, stdev=832.35 00:29:56.138 clat percentiles (usec): 00:29:56.138 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:56.138 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:29:56.138 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:29:56.138 | 99.00th=[11469], 99.50th=[11731], 99.90th=[14615], 99.95th=[15008], 00:29:56.138 | 99.99th=[15139] 00:29:56.138 bw ( KiB/s): min=27952, max=29400, per=99.89%, avg=28878.00, stdev=659.51, samples=4 00:29:56.138 iops : min= 6988, max= 7350, avg=7219.50, stdev=164.88, samples=4 00:29:56.138 write: IOPS=7194, BW=28.1MiB/s (29.5MB/s)(56.4MiB/2008msec); 0 zone resets 00:29:56.138 slat (nsec): min=2209, max=87096, avg=2636.27, stdev=885.81 00:29:56.138 clat (usec): min=1258, max=14790, avg=8069.93, stdev=685.22 00:29:56.138 lat (usec): min=1264, max=14792, avg=8072.56, stdev=685.17 00:29:56.138 clat percentiles (usec): 00:29:56.138 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7570], 00:29:56.138 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:29:56.138 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:29:56.138 | 99.00th=[ 9372], 99.50th=[ 9765], 99.90th=[12780], 99.95th=[14222], 00:29:56.138 | 99.99th=[14746] 00:29:56.138 bw ( KiB/s): min=28608, max=28944, per=100.00%, avg=28788.00, stdev=149.88, samples=4 00:29:56.138 iops : min= 7152, max= 7236, avg=7197.00, stdev=37.47, samples=4 00:29:56.138 lat (msec) : 2=0.01%, 4=0.10%, 10=84.00%, 20=15.89% 00:29:56.138 cpu : usr=66.52%, sys=32.14%, ctx=78, majf=0, minf=31 00:29:56.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:56.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:56.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:56.138 issued rwts: total=14513,14447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:56.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:56.138 00:29:56.138 Run status group 0 (all jobs): 00:29:56.138 READ: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=56.7MiB (59.4MB), run=2008-2008msec 00:29:56.138 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=56.4MiB (59.2MB), run=2008-2008msec 00:29:56.138 00:13:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.138 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.138 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:56.139 00:13:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:56.139 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:56.139 fio-3.35 00:29:56.139 Starting 1 thread 00:29:56.139 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.679 00:29:58.679 test: (groupid=0, jobs=1): err= 0: pid=1347093: Tue Jul 16 00:13:32 2024 00:29:58.679 read: IOPS=6716, BW=105MiB/s (110MB/s)(211MiB/2006msec) 00:29:58.679 slat (nsec): 
min=3117, max=94482, avg=4181.96, stdev=1760.10 00:29:58.679 clat (usec): min=2395, max=22563, avg=10618.29, stdev=2347.15 00:29:58.679 lat (usec): min=2399, max=22566, avg=10622.48, stdev=2347.31 00:29:58.679 clat percentiles (usec): 00:29:58.679 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[ 8979], 00:29:58.679 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:29:58.679 | 70.00th=[11338], 80.00th=[12387], 90.00th=[13960], 95.00th=[15139], 00:29:58.679 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19792], 00:29:58.679 | 99.99th=[21365] 00:29:58.679 bw ( KiB/s): min=43616, max=62080, per=48.94%, avg=52592.00, stdev=8630.93, samples=4 00:29:58.679 iops : min= 2726, max= 3880, avg=3287.00, stdev=539.43, samples=4 00:29:58.679 write: IOPS=3913, BW=61.1MiB/s (64.1MB/s)(108MiB/1769msec); 0 zone resets 00:29:58.679 slat (usec): min=32, max=325, avg=38.73, stdev= 9.84 00:29:58.679 clat (usec): min=5337, max=29123, avg=15491.80, stdev=2485.01 00:29:58.679 lat (usec): min=5370, max=29156, avg=15530.53, stdev=2486.40 00:29:58.679 clat percentiles (usec): 00:29:58.679 | 1.00th=[ 9765], 5.00th=[11076], 10.00th=[12518], 20.00th=[13566], 00:29:58.679 | 30.00th=[14353], 40.00th=[15008], 50.00th=[15533], 60.00th=[16057], 00:29:58.679 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18744], 95.00th=[19530], 00:29:58.679 | 99.00th=[20841], 99.50th=[22414], 99.90th=[24511], 99.95th=[24773], 00:29:58.679 | 99.99th=[29230] 00:29:58.679 bw ( KiB/s): min=46048, max=65120, per=87.72%, avg=54928.00, stdev=8773.23, samples=4 00:29:58.679 iops : min= 2878, max= 4070, avg=3433.00, stdev=548.33, samples=4 00:29:58.679 lat (msec) : 4=0.25%, 10=30.27%, 20=68.59%, 50=0.88% 00:29:58.679 cpu : usr=77.76%, sys=20.90%, ctx=47, majf=0, minf=63 00:29:58.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:58.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.679 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.679 issued rwts: total=13474,6923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.679 00:29:58.679 Run status group 0 (all jobs): 00:29:58.679 READ: bw=105MiB/s (110MB/s), 105MiB/s-105MiB/s (110MB/s-110MB/s), io=211MiB (221MB), run=2006-2006msec 00:29:58.679 WRITE: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=108MiB (113MB), run=1769-1769msec 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:29:58.679 00:13:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 -i 10.0.0.2 00:30:01.959 Nvme0n1 00:30:01.959 
00:13:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:04.480 00:13:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=7bdac06e-7ddc-49e1-a4b6-0de03f4b268e 00:30:04.480 00:13:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 7bdac06e-7ddc-49e1-a4b6-0de03f4b268e 00:30:04.481 00:13:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=7bdac06e-7ddc-49e1-a4b6-0de03f4b268e 00:30:04.481 00:13:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:04.481 00:13:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:04.481 00:13:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:04.481 00:13:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:04.737 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:04.737 { 00:30:04.737 "uuid": "7bdac06e-7ddc-49e1-a4b6-0de03f4b268e", 00:30:04.737 "name": "lvs_0", 00:30:04.737 "base_bdev": "Nvme0n1", 00:30:04.737 "total_data_clusters": 930, 00:30:04.737 "free_clusters": 930, 00:30:04.737 "block_size": 512, 00:30:04.737 "cluster_size": 1073741824 00:30:04.737 } 00:30:04.737 ]' 00:30:04.737 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="7bdac06e-7ddc-49e1-a4b6-0de03f4b268e") .free_clusters' 00:30:04.737 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:04.737 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="7bdac06e-7ddc-49e1-a4b6-0de03f4b268e") .cluster_size' 00:30:05.049 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:05.049 00:13:39 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # free_mb=952320 00:30:05.049 00:13:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:05.049 952320 00:30:05.049 00:13:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:05.306 d9010bbd-09d0-4709-a7b6-0d5632264dcd 00:30:05.306 00:13:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:05.563 00:13:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.126 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:06.383 00:13:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.383 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:06.383 fio-3.35 00:30:06.383 Starting 1 thread 00:30:06.383 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.908 00:30:08.908 test: (groupid=0, jobs=1): err= 0: pid=1348109: Tue Jul 16 00:13:43 2024 00:30:08.908 read: IOPS=5318, BW=20.8MiB/s (21.8MB/s)(41.7MiB/2008msec) 00:30:08.908 slat (usec): min=2, max=227, avg= 2.74, stdev= 2.74 00:30:08.908 clat (usec): min=819, max=171907, avg=13165.15, stdev=12252.68 00:30:08.908 lat (usec): min=823, max=171982, avg=13167.89, stdev=12253.21 00:30:08.908 clat percentiles (msec): 00:30:08.908 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:30:08.908 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:30:08.908 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 14], 00:30:08.908 | 99.00th=[ 16], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:08.908 | 99.99th=[ 171] 00:30:08.908 bw ( KiB/s): min=15264, max=23232, per=99.66%, avg=21200.00, stdev=3957.60, samples=4 00:30:08.908 iops : min= 3816, max= 5808, avg=5300.00, stdev=989.40, samples=4 00:30:08.908 write: IOPS=5301, BW=20.7MiB/s (21.7MB/s)(41.6MiB/2008msec); 0 zone resets 00:30:08.908 slat (usec): min=2, max=143, avg= 2.86, stdev= 1.60 00:30:08.908 clat (usec): min=310, max=169631, avg=10817.23, stdev=11520.88 00:30:08.908 lat (usec): min=313, max=169640, avg=10820.09, stdev=11521.34 00:30:08.908 clat percentiles (msec): 00:30:08.908 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:30:08.908 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:30:08.908 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:30:08.908 | 99.00th=[ 13], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:30:08.908 | 99.99th=[ 169] 00:30:08.908 bw ( KiB/s): min=16112, max=23000, per=99.95%, avg=21194.00, stdev=3389.44, samples=4 00:30:08.908 iops 
: min= 4028, max= 5750, avg=5298.50, stdev=847.36, samples=4 00:30:08.908 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:08.908 lat (msec) : 2=0.03%, 4=0.09%, 10=26.13%, 20=73.11%, 50=0.01% 00:30:08.908 lat (msec) : 250=0.60% 00:30:08.908 cpu : usr=66.52%, sys=32.24%, ctx=87, majf=0, minf=31 00:30:08.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:08.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:08.908 issued rwts: total=10679,10645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:08.908 00:30:08.908 Run status group 0 (all jobs): 00:30:08.908 READ: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=41.7MiB (43.7MB), run=2008-2008msec 00:30:08.908 WRITE: bw=20.7MiB/s (21.7MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=41.6MiB (43.6MB), run=2008-2008msec 00:30:08.908 00:13:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:09.166 00:13:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:10.554 00:13:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96 00:30:10.554 00:13:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96 00:30:10.554 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:10.555 00:13:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:10.555 { 00:30:10.555 "uuid": "7bdac06e-7ddc-49e1-a4b6-0de03f4b268e", 00:30:10.555 "name": "lvs_0", 00:30:10.555 "base_bdev": "Nvme0n1", 00:30:10.555 "total_data_clusters": 930, 00:30:10.555 "free_clusters": 0, 00:30:10.555 "block_size": 512, 00:30:10.555 "cluster_size": 1073741824 00:30:10.555 }, 00:30:10.555 { 00:30:10.555 "uuid": "daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96", 00:30:10.555 "name": "lvs_n_0", 00:30:10.555 "base_bdev": "d9010bbd-09d0-4709-a7b6-0d5632264dcd", 00:30:10.555 "total_data_clusters": 237847, 00:30:10.555 "free_clusters": 237847, 00:30:10.555 "block_size": 512, 00:30:10.555 "cluster_size": 4194304 00:30:10.555 } 00:30:10.555 ]' 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96") .free_clusters' 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="daa0b2c0-b8e2-4300-8dc1-3fe1218d7b96") .cluster_size' 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:10.555 951388 00:30:10.555 00:13:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:11.488 619fed39-e00e-450f-8255-9d69355401ab 00:30:11.488 00:13:45 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:11.745 00:13:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:12.004 00:13:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.262 
00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:12.262 00:13:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:12.521 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:12.521 fio-3.35 00:30:12.521 Starting 1 thread 00:30:12.521 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.048 00:30:15.048 test: (groupid=0, jobs=1): err= 0: pid=1348649: Tue Jul 16 00:13:49 2024 00:30:15.048 read: 
IOPS=5089, BW=19.9MiB/s (20.8MB/s)(40.0MiB/2010msec) 00:30:15.048 slat (usec): min=2, max=225, avg= 2.81, stdev= 2.96 00:30:15.048 clat (usec): min=4634, max=24463, avg=13677.84, stdev=1277.87 00:30:15.048 lat (usec): min=4671, max=24465, avg=13680.65, stdev=1277.67 00:30:15.048 clat percentiles (usec): 00:30:15.048 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12256], 20.00th=[12649], 00:30:15.048 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:30:15.048 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:30:15.048 | 99.00th=[16450], 99.50th=[16581], 99.90th=[20055], 99.95th=[23725], 00:30:15.048 | 99.99th=[24511] 00:30:15.048 bw ( KiB/s): min=19272, max=20728, per=99.71%, avg=20300.00, stdev=688.76, samples=4 00:30:15.048 iops : min= 4818, max= 5182, avg=5075.00, stdev=172.19, samples=4 00:30:15.048 write: IOPS=5070, BW=19.8MiB/s (20.8MB/s)(39.8MiB/2010msec); 0 zone resets 00:30:15.049 slat (usec): min=2, max=157, avg= 2.93, stdev= 1.79 00:30:15.049 clat (usec): min=3264, max=21645, avg=11309.36, stdev=1050.25 00:30:15.049 lat (usec): min=3278, max=21647, avg=11312.29, stdev=1050.11 00:30:15.049 clat percentiles (usec): 00:30:15.049 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:30:15.049 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:30:15.049 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:30:15.049 | 99.00th=[13566], 99.50th=[13829], 99.90th=[18220], 99.95th=[20055], 00:30:15.049 | 99.99th=[21627] 00:30:15.049 bw ( KiB/s): min=20096, max=20480, per=100.00%, avg=20288.00, stdev=165.25, samples=4 00:30:15.049 iops : min= 5024, max= 5120, avg=5072.00, stdev=41.31, samples=4 00:30:15.049 lat (msec) : 4=0.04%, 10=4.29%, 20=95.59%, 50=0.08% 00:30:15.049 cpu : usr=67.40%, sys=31.36%, ctx=50, majf=0, minf=31 00:30:15.049 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:15.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:15.049 issued rwts: total=10230,10191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.049 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:15.049 00:30:15.049 Run status group 0 (all jobs): 00:30:15.049 READ: bw=19.9MiB/s (20.8MB/s), 19.9MiB/s-19.9MiB/s (20.8MB/s-20.8MB/s), io=40.0MiB (41.9MB), run=2010-2010msec 00:30:15.049 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=39.8MiB (41.7MB), run=2010-2010msec 00:30:15.049 00:13:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:15.049 00:13:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:15.049 00:13:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:19.267 00:13:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:19.267 00:13:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:22.546 00:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:22.546 00:13:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:24.445 rmmod nvme_tcp 00:30:24.445 rmmod nvme_fabrics 00:30:24.445 rmmod nvme_keyring 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1346577 ']' 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1346577 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1346577 ']' 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1346577 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1346577 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1346577' 00:30:24.445 killing process with pid 1346577 00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1346577 
00:30:24.445 00:13:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1346577 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.705 00:13:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.608 00:14:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:26.608 00:30:26.608 real 0m36.920s 00:30:26.608 user 2m22.803s 00:30:26.608 sys 0m6.397s 00:30:26.608 00:14:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:26.608 00:14:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.608 ************************************ 00:30:26.608 END TEST nvmf_fio_host 00:30:26.608 ************************************ 00:30:26.608 00:14:01 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:26.608 00:14:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:26.608 00:14:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:26.608 00:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:26.608 ************************************ 00:30:26.608 START TEST nvmf_failover 00:30:26.608 ************************************ 00:30:26.608 00:14:01 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:26.865 * Looking for test storage... 00:30:26.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.865 00:14:01 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.865 00:14:01 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:26.866 00:14:01 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:26.866 00:14:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.242 00:14:02 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:30:28.242 Found 0000:08:00.0 (0x8086 - 0x159b) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:30:28.242 Found 0000:08:00.1 (0x8086 - 0x159b) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:28.242 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:30:28.243 Found net devices under 0000:08:00.0: cvl_0_0 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:30:28.243 Found net devices under 0000:08:00.1: cvl_0_1 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:28.243 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:28.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:30:28.501 00:30:28.501 --- 10.0.0.2 ping statistics --- 00:30:28.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.501 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:30:28.501 00:30:28.501 --- 10.0.0.1 ping statistics --- 00:30:28.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.501 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1351156 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1351156 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1351156 ']' 
00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:28.501 00:14:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.501 [2024-07-16 00:14:02.939467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:28.501 [2024-07-16 00:14:02.939568] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.501 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.501 [2024-07-16 00:14:03.005665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.760 [2024-07-16 00:14:03.096009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.760 [2024-07-16 00:14:03.096066] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.760 [2024-07-16 00:14:03.096082] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.760 [2024-07-16 00:14:03.096096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.760 [2024-07-16 00:14:03.096108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.760 [2024-07-16 00:14:03.096173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.760 [2024-07-16 00:14:03.096205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.760 [2024-07-16 00:14:03.096214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.760 00:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.017 [2024-07-16 00:14:03.501825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.017 00:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:29.581 Malloc0 00:30:29.581 00:14:03 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.838 00:14:04 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.097 00:14:04 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.355 [2024-07-16 00:14:04.707244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.355 00:14:04 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.614 [2024-07-16 00:14:04.955949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.614 00:14:04 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.872 [2024-07-16 00:14:05.192636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1351371 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1351371 /var/tmp/bdevperf.sock 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1351371 ']' 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:30.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:30.872 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:31.130 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:31.130 00:14:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:31.130 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.696 NVMe0n1 00:30:31.696 00:14:05 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.954 00:30:31.955 00:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1351471 00:30:31.955 00:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:31.955 00:14:06 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:32.889 00:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.147 00:14:07 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:36.424 00:14:10 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.681 00:30:36.681 00:14:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:36.973 [2024-07-16 00:14:11.321756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321959] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.321999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322131] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322312] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.973 [2024-07-16 00:14:11.322326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 [2024-07-16 00:14:11.322439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12641e0 is same with the state(5) to be set 00:30:36.974 00:14:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:40.248 00:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.248 [2024-07-16 00:14:14.618127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:40.248 00:14:14 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:41.179 00:14:15 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.436 [2024-07-16 00:14:15.920631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 
[2024-07-16 00:14:15.920829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.920995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921022] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921195] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921365] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.436 [2024-07-16 00:14:15.921471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.437 [2024-07-16 00:14:15.921484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.437 [2024-07-16 00:14:15.921497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.437 [2024-07-16 00:14:15.921510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set 00:30:41.437 [2024-07-16 00:14:15.921524] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 [2024-07-16 00:14:15.921671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1264550 is same with the state(5) to be set
00:30:41.437 00:14:15 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1351471
00:30:48.055 0
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1351371 ']'
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1351371'
00:30:48.055 killing process with pid 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1351371
00:30:48.055 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:48.055 [2024-07-16 00:14:05.255025] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:30:48.055 [2024-07-16 00:14:05.255150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351371 ]
00:30:48.055 EAL: No free 2048 kB hugepages reported on node 1
00:30:48.055 [2024-07-16 00:14:05.308804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:48.055 [2024-07-16 00:14:05.395619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:48.055 Running I/O for 15 seconds...
00:30:48.055 [2024-07-16 00:14:07.631744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.055 [2024-07-16 00:14:07.631813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.055 [2024-07-16 00:14:07.631844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.055 [2024-07-16 00:14:07.631862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.055 [2024-07-16 00:14:07.631881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.055 [2024-07-16 00:14:07.631897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.055 [2024-07-16 00:14:07.631915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:48.055 [2024-07-16 00:14:07.631932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:48.055 [2024-07-16 00:14:07.631950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.055 [2024-07-16 00:14:07.631965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.631984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.055 [2024-07-16 00:14:07.632000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.055 [2024-07-16 00:14:07.632033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.055 [2024-07-16 00:14:07.632067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 
[2024-07-16 00:14:07.632550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.055 [2024-07-16 00:14:07.632759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.055 [2024-07-16 00:14:07.632775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.632809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.632842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.632877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.632911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.632945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.632979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.632997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 
00:14:07.633136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.056 [2024-07-16 00:14:07.633714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.633973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.633989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:48.056 [2024-07-16 00:14:07.634124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.056 [2024-07-16 00:14:07.634234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.056 [2024-07-16 00:14:07.634252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 
[2024-07-16 00:14:07.634712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.634968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.634984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.635018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.635052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.635085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.635119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635312] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.057 [2024-07-16 00:14:07.635670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.057 [2024-07-16 00:14:07.635685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.057 [2024-07-16 00:14:07.635711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.057 [2024-07-16 00:14:07.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:07.635964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.635982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.635999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:07.636214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.058 [2024-07-16 00:14:07.636265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.058 [2024-07-16 00:14:07.636279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70512 len:8 PRP1 0x0 PRP2 0x0 00:30:48.058 [2024-07-16 00:14:07.636294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636355] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe93c10 was disconnected and 
freed. reset controller. 00:30:48.058 [2024-07-16 00:14:07.636379] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:48.058 [2024-07-16 00:14:07.636420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.058 [2024-07-16 00:14:07.636439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.058 [2024-07-16 00:14:07.636489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.058 [2024-07-16 00:14:07.636524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.058 [2024-07-16 00:14:07.636555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:07.636570] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.058 [2024-07-16 00:14:07.640639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.058 [2024-07-16 00:14:07.640680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe75360 (9): Bad file descriptor 00:30:48.058 [2024-07-16 00:14:07.718329] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:48.058 [2024-07-16 00:14:11.324126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.058 [2024-07-16 00:14:11.324195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54504 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:54600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.058 [2024-07-16 00:14:11.324863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.058 [2024-07-16 00:14:11.324879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.324897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.324913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.324931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 
[2024-07-16 00:14:11.324947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.324964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.324980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.324998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:54728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:54744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:54784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:54800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:54840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 
00:14:11.325927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:54872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.325976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.326014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.059 [2024-07-16 00:14:11.326032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.059 [2024-07-16 00:14:11.326048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326115] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:54928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54960 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 
[2024-07-16 00:14:11.326928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.326980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.326996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327115] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.060 [2024-07-16 00:14:11.327308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:48.060 [2024-07-16 00:14:11.327326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.061 [2024-07-16 00:14:11.327583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.327638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.327653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.061 [2024-07-16 00:14:11.327745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.061 [2024-07-16 00:14:11.327778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 
00:14:11.327794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.061 [2024-07-16 00:14:11.327809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.061 [2024-07-16 00:14:11.327840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.327860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe75360 is same with the state(5) to be set 00:30:48.061 [2024-07-16 00:14:11.328127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55264 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55272 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:48.061 [2024-07-16 00:14:11.328265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55280 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55288 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55296 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:30:48.061 [2024-07-16 00:14:11.328463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55304 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55312 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55320 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55328 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55336 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.061 [2024-07-16 00:14:11.328733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.061 [2024-07-16 00:14:11.328746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.061 [2024-07-16 00:14:11.328760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55344 len:8 PRP1 0x0 PRP2 0x0 00:30:48.061 [2024-07-16 00:14:11.328775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.328791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.328804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.328818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.328832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.328847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 
[2024-07-16 00:14:11.328861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.328874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.328889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.328904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.328917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.328930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55368 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.328945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.328961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.328974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.328987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54360 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:54368 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54376 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54384 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54392 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329258] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54400 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54408 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54416 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329454] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54424 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54432 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54440 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54448 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54456 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54464 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54472 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:48.062 [2024-07-16 00:14:11.329864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54352 len:8 PRP1 0x0 PRP2 0x0 00:30:48.062 [2024-07-16 00:14:11.329879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.062 [2024-07-16 00:14:11.329895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.062 [2024-07-16 00:14:11.329908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.329925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54480 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.329940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.329955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.329969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.329982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54488 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.329997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54496 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54504 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54512 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54520 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:30:48.063 [2024-07-16 00:14:11.330276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54528 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54536 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54544 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:54552 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54560 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54568 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54576 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 
00:14:11.330678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54584 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54592 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54600 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 
[2024-07-16 00:14:11.330885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54608 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.063 [2024-07-16 00:14:11.330929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.063 [2024-07-16 00:14:11.330943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54616 len:8 PRP1 0x0 PRP2 0x0 00:30:48.063 [2024-07-16 00:14:11.330957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.063 [2024-07-16 00:14:11.330972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.330985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.330999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54624 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54632 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54640 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54648 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54656 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331287] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54664 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54672 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54680 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54688 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 
[2024-07-16 00:14:11.331488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54696 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54704 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54712 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:48.064 [2024-07-16 00:14:11.331685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.064 [2024-07-16 00:14:11.331699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54720 len:8 PRP1 0x0 PRP2 0x0 00:30:48.064 [2024-07-16 00:14:11.331714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.064 [2024-07-16 00:14:11.331728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.331741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.331758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54728 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.331774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.331789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.331802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.331816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54736 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.331830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.331846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.331859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.331872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54744 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.331887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.331902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.331915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.331929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54752 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.331944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.331959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.331972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.331985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54760 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54768 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54776 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54784 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54792 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54800 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54808 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54816 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54824 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54832 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54840 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54848 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 
[2024-07-16 00:14:11.332675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54856 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54864 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54872 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:54880 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54888 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.332953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.332966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.332980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54896 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.332995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.333010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.065 [2024-07-16 00:14:11.333023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.065 [2024-07-16 00:14:11.333036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54904 len:8 PRP1 0x0 PRP2 0x0 00:30:48.065 [2024-07-16 00:14:11.333051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.065 [2024-07-16 00:14:11.333066] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.333079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.333096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54912 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.333111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.333127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.339746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.339779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54920 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.339797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.339816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.339830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.339843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54928 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.339859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.339874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.339888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 
00:14:11.339901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54936 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.339916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.339931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.339945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.339958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54944 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.339973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.339988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54952 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54960 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54968 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54976 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54984 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340308] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54992 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55000 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55008 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55016 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 
[2024-07-16 00:14:11.340507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55024 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55032 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55040 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55048 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55056 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55064 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55072 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.340951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55080 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.340967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.066 [2024-07-16 00:14:11.340982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.066 [2024-07-16 00:14:11.340996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.066 [2024-07-16 00:14:11.341009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55088 len:8 PRP1 0x0 PRP2 0x0 00:30:48.066 [2024-07-16 00:14:11.341024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55096 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55104 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55112 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55120 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55136 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55144 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55152 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55160 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55168 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55176 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 
[2024-07-16 00:14:11.341701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55184 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55192 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55200 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:55208 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.341947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55216 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.341962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.341984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.341997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.342011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55224 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.342026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.342041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.342054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.342068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55232 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.342082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.342097] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.342110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.342124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55240 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.342144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.342161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.342174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.342187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55248 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.342217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.067 [2024-07-16 00:14:11.342230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.067 [2024-07-16 00:14:11.342243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55256 len:8 PRP1 0x0 PRP2 0x0 00:30:48.067 [2024-07-16 00:14:11.342258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.067 [2024-07-16 00:14:11.342318] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x103eaf0 was disconnected and freed. reset controller. 
00:30:48.067 [2024-07-16 00:14:11.342341] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:48.067 [2024-07-16 00:14:11.342358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.067 [2024-07-16 00:14:11.342417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe75360 (9): Bad file descriptor 00:30:48.068 [2024-07-16 00:14:11.346431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.068 [2024-07-16 00:14:11.513741] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:48.068 [2024-07-16 00:14:15.923633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.068 [2024-07-16 00:14:15.923679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.923974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.923991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.068 [2024-07-16 00:14:15.924221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 
lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.068 [2024-07-16 00:14:15.924772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.068 [2024-07-16 00:14:15.924789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 
[2024-07-16 00:14:15.924806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.924840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.924874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.924908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.924942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.924976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.924992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 
[2024-07-16 00:14:15.925411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.069 [2024-07-16 00:14:15.925789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.069 [2024-07-16 00:14:15.925805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.925823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.925839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.925857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.925873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.925891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.925908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.925925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.925941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.925963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.925980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 
[2024-07-16 00:14:15.925998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.070 [2024-07-16 00:14:15.926234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 
[2024-07-16 00:14:15.926603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.926980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.926997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.927015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.070 [2024-07-16 00:14:15.927049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.070 [2024-07-16 00:14:15.927065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 
[2024-07-16 00:14:15.927195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.071 [2024-07-16 00:14:15.927350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:113560 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113568 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113576 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113584 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 
00:14:15.927614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113592 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113600 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113608 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 
[2024-07-16 00:14:15.927814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113616 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113624 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113632 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.927943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.927958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.927972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.927985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113640 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113648 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113656 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113664 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928213] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113672 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113680 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113688 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113696 len:8 PRP1 0x0 PRP2 0x0 
00:30:48.071 [2024-07-16 00:14:15.928412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113704 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.071 [2024-07-16 00:14:15.928511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113712 len:8 PRP1 0x0 PRP2 0x0 00:30:48.071 [2024-07-16 00:14:15.928526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.071 [2024-07-16 00:14:15.928542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.071 [2024-07-16 00:14:15.928555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.072 [2024-07-16 00:14:15.928568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113720 len:8 PRP1 0x0 PRP2 0x0 00:30:48.072 [2024-07-16 00:14:15.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928598] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.072 [2024-07-16 00:14:15.928612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.072 [2024-07-16 00:14:15.928625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113728 len:8 PRP1 0x0 PRP2 0x0 00:30:48.072 [2024-07-16 00:14:15.928640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.072 [2024-07-16 00:14:15.928673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.072 [2024-07-16 00:14:15.928686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113736 len:8 PRP1 0x0 PRP2 0x0 00:30:48.072 [2024-07-16 00:14:15.928701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928760] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe71340 was disconnected and freed. reset controller. 
00:30:48.072 [2024-07-16 00:14:15.928784] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:48.072 [2024-07-16 00:14:15.928823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.072 [2024-07-16 00:14:15.928842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.072 [2024-07-16 00:14:15.928874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.072 [2024-07-16 00:14:15.928906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.072 [2024-07-16 00:14:15.928937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.072 [2024-07-16 00:14:15.928953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.072 [2024-07-16 00:14:15.929010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe75360 (9): Bad file descriptor 00:30:48.072 [2024-07-16 00:14:15.932999] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.072 [2024-07-16 00:14:16.009834] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:48.072 00:30:48.072 Latency(us) 00:30:48.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.072 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:48.072 Verification LBA range: start 0x0 length 0x4000 00:30:48.072 NVMe0n1 : 15.01 7654.03 29.90 676.04 0.00 15333.08 649.29 27573.67 00:30:48.072 =================================================================================================================== 00:30:48.072 Total : 7654.03 29.90 676.04 0.00 15333.08 649.29 27573.67 00:30:48.072 Received shutdown signal, test time was about 15.000000 seconds 00:30:48.072 00:30:48.072 Latency(us) 00:30:48.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.072 =================================================================================================================== 00:30:48.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1352797 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:48.072 00:14:21 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1352797 /var/tmp/bdevperf.sock 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1352797 ']' 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:48.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:48.072 00:14:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:48.072 00:14:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:48.072 00:14:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:48.072 00:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:48.072 [2024-07-16 00:14:22.226825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.072 00:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:48.072 [2024-07-16 00:14:22.479521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:48.072 00:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.330 NVMe0n1 00:30:48.600 00:14:22 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.862 00:30:48.862 00:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.119 00:30:49.119 00:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:49.119 00:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:49.376 00:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.634 00:14:23 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:52.913 00:14:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.913 00:14:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:52.913 00:14:27 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1353279 00:30:52.913 00:14:27 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:52.913 00:14:27 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1353279 00:30:53.849 0 00:30:53.849 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.849 [2024-07-16 00:14:21.759318] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:53.849 [2024-07-16 00:14:21.759428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352797 ] 00:30:53.849 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.849 [2024-07-16 00:14:21.819638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.849 [2024-07-16 00:14:21.905970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.849 [2024-07-16 00:14:23.908335] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:53.849 [2024-07-16 00:14:23.908422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.849 [2024-07-16 00:14:23.908445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.849 [2024-07-16 00:14:23.908464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.849 [2024-07-16 00:14:23.908479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.849 [2024-07-16 00:14:23.908494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.849 [2024-07-16 00:14:23.908509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.849 [2024-07-16 00:14:23.908525] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.849 [2024-07-16 00:14:23.908539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.849 [2024-07-16 00:14:23.908554] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.849 [2024-07-16 00:14:23.908603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.849 [2024-07-16 00:14:23.908637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246e360 (9): Bad file descriptor 00:30:53.849 [2024-07-16 00:14:23.959575] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:53.849 Running I/O for 1 seconds... 00:30:53.849 00:30:53.849 Latency(us) 00:30:53.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.849 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.849 Verification LBA range: start 0x0 length 0x4000 00:30:53.849 NVMe0n1 : 1.01 7427.59 29.01 0.00 0.00 17156.01 3131.16 14466.47 00:30:53.849 =================================================================================================================== 00:30:53.849 Total : 7427.59 29.01 0.00 0.00 17156.01 3131.16 14466.47 00:30:53.849 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.849 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:54.414 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:30:54.414 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:54.414 00:14:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:54.671 00:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.929 00:14:29 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1352797 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1352797 ']' 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1352797 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1352797 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1352797' 00:30:58.209 killing process with pid 1352797 00:30:58.209 00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1352797 00:30:58.209 
00:14:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1352797 00:30:58.468 00:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:58.468 00:14:32 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.726 00:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:58.727 rmmod nvme_tcp 00:30:58.727 rmmod nvme_fabrics 00:30:58.727 rmmod nvme_keyring 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1351156 ']' 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1351156 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1351156 ']' 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1351156 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@951 -- # uname 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1351156 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1351156' 00:30:58.727 killing process with pid 1351156 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1351156 00:30:58.727 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1351156 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.013 00:14:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.540 00:14:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:01.540 00:31:01.540 real 0m34.360s 00:31:01.540 user 2m2.226s 00:31:01.540 sys 0m5.590s 00:31:01.540 00:14:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:01.540 00:14:35 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.540 ************************************ 00:31:01.540 END TEST nvmf_failover 00:31:01.540 ************************************ 00:31:01.540 00:14:35 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:01.540 00:14:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:01.540 00:14:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:01.540 00:14:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.540 ************************************ 00:31:01.540 START TEST nvmf_host_discovery 00:31:01.540 ************************************ 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:01.540 * Looking for test storage... 00:31:01.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:01.540 00:14:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:01.540 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:01.541 00:14:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:02.916 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:02.917 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:02.917 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:02.917 Found net devices under 0000:08:00.0: cvl_0_0 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:02.917 Found net devices under 0000:08:00.1: cvl_0_1 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:02.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:31:02.917 00:31:02.917 --- 10.0.0.2 ping statistics --- 00:31:02.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.917 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:02.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:31:02.917 00:31:02.917 --- 10.0.0.1 ping statistics --- 00:31:02.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.917 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1355236 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1355236 00:31:02.917 
00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1355236 ']' 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:02.917 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.917 [2024-07-16 00:14:37.260698] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:02.917 [2024-07-16 00:14:37.260794] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.918 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.918 [2024-07-16 00:14:37.325887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.918 [2024-07-16 00:14:37.415419] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.918 [2024-07-16 00:14:37.415478] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.918 [2024-07-16 00:14:37.415493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:02.918 [2024-07-16 00:14:37.415507] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:31:02.918 [2024-07-16 00:14:37.415519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.918 [2024-07-16 00:14:37.415548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 [2024-07-16 00:14:37.551955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 [2024-07-16 00:14:37.560105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:03.176 00:14:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 null0 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 null1 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1355260 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1355260 /tmp/host.sock 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1355260 ']' 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:03.176 
00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:03.176 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:03.176 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.176 [2024-07-16 00:14:37.636469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:03.176 [2024-07-16 00:14:37.636567] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355260 ] 00:31:03.176 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.434 [2024-07-16 00:14:37.698053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.434 [2024-07-16 00:14:37.785555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:03.434 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:37 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == 
'' ]]
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.692 [2024-07-16 00:14:38.189797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:03.692 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:03.693 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count ))
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]]
00:31:03.951 00:14:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1
00:31:04.518 [2024-07-16 00:14:38.909393] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:31:04.518 [2024-07-16 00:14:38.909424] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:31:04.518 [2024-07-16 00:14:38.909448] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:04.518 [2024-07-16 00:14:38.995762] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:31:04.776 [2024-07-16 00:14:39.093358] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:31:04.776 [2024-07-16 00:14:39.093394] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]]
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.035 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count ))
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count ))
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.294 [2024-07-16 00:14:39.670088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:05.294 [2024-07-16 00:14:39.670646] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:31:05.294 [2024-07-16 00:14:39.670689] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:05.294 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.295 [2024-07-16 00:14:39.756933] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:05.295 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:05.552 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:31:05.552 00:14:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1
00:31:05.552 [2024-07-16 00:14:40.056319] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:31:05.552 [2024-07-16 00:14:40.056361] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:31:05.552 [2024-07-16 00:14:40.056373] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count ))
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:06.486 [2024-07-16 00:14:40.907058] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:31:06.486 [2024-07-16 00:14:40.907096] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- ))
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:31:06.486 [2024-07-16 00:14:40.915172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:06.486 [2024-07-16 00:14:40.915218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.486 [2024-07-16 00:14:40.915238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:06.486 [2024-07-16 00:14:40.915254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.486 [2024-07-16 00:14:40.915271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:06.486 [2024-07-16 00:14:40.915286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.486 [2024-07-16 00:14:40.915302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:06.486 [2024-07-16 00:14:40.915317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:06.486 [2024-07-16 00:14:40.915332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:31:06.486 [2024-07-16 00:14:40.925186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor
00:31:06.486 [2024-07-16 00:14:40.935227] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:06.486 [2024-07-16 00:14:40.935421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.486 [2024-07-16 00:14:40.935463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420
00:31:06.486 [2024-07-16 00:14:40.935483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set
00:31:06.486 [2024-07-16 00:14:40.935510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor
00:31:06.486 [2024-07-16 00:14:40.935534] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:06.486 [2024-07-16 00:14:40.935551] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:31:06.486 [2024-07-16 00:14:40.935568] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:06.486 [2024-07-16 00:14:40.935592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:06.486 [2024-07-16 00:14:40.945308] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:06.486 [2024-07-16 00:14:40.945432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.486 [2024-07-16 00:14:40.945461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420
00:31:06.486 [2024-07-16 00:14:40.945479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set
00:31:06.486 [2024-07-16 00:14:40.945503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor
00:31:06.486 [2024-07-16 00:14:40.945526] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:06.486 [2024-07-16 00:14:40.945541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:31:06.486 [2024-07-16 00:14:40.945556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:06.486 [2024-07-16 00:14:40.945578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:06.486 [2024-07-16 00:14:40.955387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:06.486 [2024-07-16 00:14:40.955554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:06.486 [2024-07-16 00:14:40.955583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420
00:31:06.486 [2024-07-16 00:14:40.955602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set
00:31:06.486 [2024-07-16 00:14:40.955628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor
00:31:06.486 [2024-07-16 00:14:40.955651] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:06.486 [2024-07-16 00:14:40.955667] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:31:06.486 [2024-07-16 00:14:40.955682] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:06.486 [2024-07-16 00:14:40.955703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.486 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.487 [2024-07-16 00:14:40.965468] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.487 [2024-07-16 00:14:40.965645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.487 [2024-07-16 00:14:40.965674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420 00:31:06.487 [2024-07-16 
00:14:40.965692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set 00:31:06.487 [2024-07-16 00:14:40.965717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor 00:31:06.487 [2024-07-16 00:14:40.965739] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.487 [2024-07-16 00:14:40.965755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.487 [2024-07-16 00:14:40.965770] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.487 [2024-07-16 00:14:40.965792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.487 [2024-07-16 00:14:40.975548] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.487 [2024-07-16 00:14:40.975693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.487 [2024-07-16 00:14:40.975727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420 00:31:06.487 [2024-07-16 00:14:40.975746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set 00:31:06.487 [2024-07-16 00:14:40.975773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor 00:31:06.487 [2024-07-16 00:14:40.975795] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.487 [2024-07-16 00:14:40.975811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.487 [2024-07-16 00:14:40.975826] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.487 [2024-07-16 00:14:40.975847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:06.487 [2024-07-16 00:14:40.985624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:06.487 [2024-07-16 00:14:40.985767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:06.487 [2024-07-16 00:14:40.985795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x606a00 with addr=10.0.0.2, port=4420 00:31:06.487 [2024-07-16 00:14:40.985812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606a00 is same with the state(5) to be set 00:31:06.487 [2024-07-16 00:14:40.985837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606a00 (9): Bad file descriptor 00:31:06.487 [2024-07-16 00:14:40.985859] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:06.487 [2024-07-16 00:14:40.985874] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:06.487 [2024-07-16 00:14:40.985890] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:06.487 [2024-07-16 00:14:40.985911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:06.487 00:14:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.487 [2024-07-16 00:14:40.994320] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:06.487 [2024-07-16 00:14:40.994354] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 00:14:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:06.745 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:06.746 00:14:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.118 [2024-07-16 00:14:42.247954] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:08.118 [2024-07-16 00:14:42.247983] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:08.118 [2024-07-16 00:14:42.248008] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:08.118 [2024-07-16 00:14:42.335284] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:08.118 [2024-07-16 00:14:42.402258] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:08.118 [2024-07-16 00:14:42.402307] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.118 request: 00:31:08.118 { 00:31:08.118 "name": "nvme", 00:31:08.118 "trtype": "tcp", 00:31:08.118 "traddr": "10.0.0.2", 00:31:08.118 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:08.118 "adrfam": "ipv4", 00:31:08.118 "trsvcid": "8009", 00:31:08.118 "wait_for_attach": true, 00:31:08.118 "method": "bdev_nvme_start_discovery", 00:31:08.118 "req_id": 1 00:31:08.118 } 00:31:08.118 Got JSON-RPC error 
response 00:31:08.118 response: 00:31:08.118 { 00:31:08.118 "code": -17, 00:31:08.118 "message": "File exists" 00:31:08.118 } 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:08.118 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 request: 00:31:08.119 { 00:31:08.119 "name": "nvme_second", 00:31:08.119 
"trtype": "tcp", 00:31:08.119 "traddr": "10.0.0.2", 00:31:08.119 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:08.119 "adrfam": "ipv4", 00:31:08.119 "trsvcid": "8009", 00:31:08.119 "wait_for_attach": true, 00:31:08.119 "method": "bdev_nvme_start_discovery", 00:31:08.119 "req_id": 1 00:31:08.119 } 00:31:08.119 Got JSON-RPC error response 00:31:08.119 response: 00:31:08.119 { 00:31:08.119 "code": -17, 00:31:08.119 "message": "File exists" 00:31:08.119 } 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:08.119 00:14:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:08.119 00:14:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:09.491 [2024-07-16 00:14:43.614699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.491 [2024-07-16 00:14:43.614765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x638ed0 with addr=10.0.0.2, port=8010 00:31:09.491 [2024-07-16 00:14:43.614794] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.491 [2024-07-16 00:14:43.614812] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.491 [2024-07-16 00:14:43.614828] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:10.423 [2024-07-16 00:14:44.617173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:10.423 [2024-07-16 00:14:44.617228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x638ed0 with addr=10.0.0.2, port=8010 00:31:10.423 [2024-07-16 00:14:44.617256] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:10.423 [2024-07-16 00:14:44.617273] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.423 [2024-07-16 00:14:44.617288] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:11.357 [2024-07-16 00:14:45.619370] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:11.357 request: 00:31:11.357 { 00:31:11.357 "name": "nvme_second", 00:31:11.357 "trtype": "tcp", 00:31:11.357 "traddr": "10.0.0.2", 00:31:11.357 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:11.357 "adrfam": "ipv4", 00:31:11.357 "trsvcid": "8010", 00:31:11.357 "attach_timeout_ms": 3000, 
00:31:11.357 "method": "bdev_nvme_start_discovery", 00:31:11.357 "req_id": 1 00:31:11.357 } 00:31:11.357 Got JSON-RPC error response 00:31:11.357 response: 00:31:11.357 { 00:31:11.357 "code": -110, 00:31:11.357 "message": "Connection timed out" 00:31:11.357 } 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1355260 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:11.357 00:14:45 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:11.357 rmmod nvme_tcp 00:31:11.357 rmmod nvme_fabrics 00:31:11.357 rmmod nvme_keyring 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:11.357 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1355236 ']' 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1355236 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1355236 ']' 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1355236 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1355236 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1355236' 
00:31:11.358 killing process with pid 1355236 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1355236 00:31:11.358 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1355236 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.618 00:14:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.526 00:14:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.526 00:31:13.526 real 0m12.518s 00:31:13.526 user 0m18.758s 00:31:13.526 sys 0m2.366s 00:31:13.526 00:14:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:13.526 00:14:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.526 ************************************ 00:31:13.526 END TEST nvmf_host_discovery 00:31:13.526 ************************************ 00:31:13.526 00:14:48 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:13.526 00:14:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:13.526 00:14:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:31:13.526 00:14:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.526 ************************************ 00:31:13.526 START TEST nvmf_host_multipath_status 00:31:13.526 ************************************ 00:31:13.526 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:13.785 * Looking for test storage... 00:31:13.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.785 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:13.786 00:14:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.786 
00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:13.786 00:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:15.161 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:15.161 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:15.161 Found net devices under 0000:08:00.0: cvl_0_0 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:15.161 Found net devices under 0000:08:00.1: cvl_0_1 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.161 00:14:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.161 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.162 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.162 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.162 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:15.162 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:15.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:15.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:31:15.421 00:31:15.421 --- 10.0.0.2 ping statistics --- 00:31:15.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.421 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:31:15.421 00:31:15.421 --- 10.0.0.1 ping statistics --- 00:31:15.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.421 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1357569 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1357569 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1357569 ']' 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:15.421 00:14:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.421 [2024-07-16 00:14:49.786379] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:15.421 [2024-07-16 00:14:49.786494] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.421 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.421 [2024-07-16 00:14:49.853333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.679 [2024-07-16 00:14:49.940057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.679 [2024-07-16 00:14:49.940108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.679 [2024-07-16 00:14:49.940124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.679 [2024-07-16 00:14:49.940144] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.679 [2024-07-16 00:14:49.940158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:15.679 [2024-07-16 00:14:49.940242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.679 [2024-07-16 00:14:49.940275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1357569 00:31:15.680 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:15.938 [2024-07-16 00:14:50.342549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.938 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:16.202 Malloc0 00:31:16.202 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:16.486 00:14:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.742 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.999 [2024-07-16 00:14:51.389427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.999 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:17.257 [2024-07-16 00:14:51.630069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1357781 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1357781 /var/tmp/bdevperf.sock 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1357781 ']' 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:17.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:17.257 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:17.514 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:17.514 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:17.514 00:14:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:17.772 00:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:18.337 Nvme0n1 00:31:18.337 00:14:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:18.594 Nvme0n1 00:31:18.852 00:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:18.852 00:14:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:20.760 00:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:20.760 00:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:21.018 00:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:21.275 00:14:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:22.208 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:22.208 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:22.208 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.208 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.466 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.466 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:22.466 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.466 00:14:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.736 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.736 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.736 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.736 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:23.302 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.302 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:23.302 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.302 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:23.560 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.560 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:23.560 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.560 00:14:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:23.818 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.818 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:23.818 00:14:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.818 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.075 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.075 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:24.075 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:24.333 00:14:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:24.591 00:14:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:25.524 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:25.524 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:25.524 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.524 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.781 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:25.782 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:25.782 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.782 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:26.039 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.039 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:26.039 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.039 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:26.296 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.296 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.296 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.296 00:15:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.553 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.553 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:31:26.553 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.553 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:26.811 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.811 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:26.811 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.811 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:27.069 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.069 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:27.069 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:27.327 00:15:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:27.584 00:15:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.957 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:29.213 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:29.213 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:29.213 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.213 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.470 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.470 00:15:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.471 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.471 00:15:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.036 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.602 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.602 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:31:30.602 00:15:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.860 00:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:31.118 00:15:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:32.110 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:32.110 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:32.110 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.110 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:32.368 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.368 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:32.368 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.368 00:15:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.626 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:31:32.626 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.626 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.626 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.883 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.884 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.884 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.884 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.142 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.142 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:33.142 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.142 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.707 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.707 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:31:33.707 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.707 00:15:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.965 00:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.965 00:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:33.965 00:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:34.223 00:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:34.481 00:15:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:35.415 00:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:35.415 00:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:35.415 00:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.415 00:15:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.673 00:15:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.673 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:35.673 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.673 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.931 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.931 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.931 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.931 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:36.496 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.496 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:36.496 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.496 00:15:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:36.754 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.754 
00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:36.754 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.754 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:37.012 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.012 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:37.012 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.012 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:37.270 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.270 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:37.270 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:37.528 00:15:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:37.528 00:15:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.895 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:39.151 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.151 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:39.151 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.151 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:39.407 00:15:13 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.407 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:39.407 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.407 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:39.663 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.663 00:15:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:39.663 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.663 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:39.919 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.919 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:39.919 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.919 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:40.176 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.176 00:15:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:40.433 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:40.433 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:40.689 00:15:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:40.946 00:15:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:41.876 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:41.876 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:41.876 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.876 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:42.134 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.134 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:42.134 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.134 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:42.391 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.391 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:42.391 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.391 00:15:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:42.648 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.648 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:42.648 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.648 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:42.905 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.905 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:42.905 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:31:42.905 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:43.162 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.162 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:43.162 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.162 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:43.434 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.434 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:43.434 00:15:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:43.692 00:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.950 00:15:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:44.884 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:44.884 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:44.884 00:15:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.884 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:45.142 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.142 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:45.142 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.142 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.400 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.400 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.400 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.400 00:15:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.657 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.657 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.657 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.657 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:46.223 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.223 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:46.223 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.223 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.482 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.482 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:46.482 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.482 00:15:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.775 00:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.775 00:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:46.775 00:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:47.033 00:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:47.290 00:15:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:48.223 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:48.223 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:48.223 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.223 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.480 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.480 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:48.480 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.480 00:15:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.738 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.738 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.738 00:15:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.738 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.995 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.995 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.996 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.996 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:49.561 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.561 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:49.561 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.561 00:15:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.819 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.819 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.819 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.819 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.077 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.077 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:50.077 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:50.335 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:50.592 00:15:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:51.524 00:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:51.524 00:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:51.524 00:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.524 00:15:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.781 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.781 00:15:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:51.781 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.781 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.038 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.038 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.038 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.038 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:52.295 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.295 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:52.295 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.295 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.553 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.553 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.553 
00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.553 00:15:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.811 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.811 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:52.811 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.811 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1357781 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1357781 ']' 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1357781 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1357781 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1357781' 00:31:53.069 killing process with pid 1357781 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1357781 00:31:53.069 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1357781 00:31:53.069 Connection closed with partial response: 00:31:53.069 00:31:53.069 00:31:53.331 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1357781 00:31:53.331 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:53.331 [2024-07-16 00:14:51.692970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:53.331 [2024-07-16 00:14:51.693075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357781 ] 00:31:53.331 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.331 [2024-07-16 00:14:51.746289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.331 [2024-07-16 00:14:51.833336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.331 Running I/O for 90 seconds... 
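The `killprocess` sequence above (autotest_common.sh@946–970) guards the final `kill` with a liveness probe and a process-name check before signaling the bdevperf PID. A minimal sketch of that guard pattern — the sleeper child here is a stand-in for illustration, not the reactor process from the log:

```shell
# killprocess-style guard, modeled on the autotest_common.sh trace above:
# only signal a PID that is non-empty, still alive, and not the sudo wrapper.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1    # probe liveness, no signal sent
    local name
    name=$(ps --no-headers -o comm= "$pid")   # resolve the process name
    [ "$name" != "sudo" ] || return 1         # refuse to kill via the wrapper
    echo "killing process with pid $pid"
    kill "$pid"
}

# Example: spawn a sleeper, kill it through the guard, confirm it is gone.
sleep 60 &
pid=$!
killprocess "$pid"
wait "$pid" 2>/dev/null
! kill -0 "$pid" 2>/dev/null
```

The trailing `wait` in the test script (multipath_status.sh@139) then reaps the process before the captured `try.txt` log is dumped, which is why the bdevperf startup banner appears only after the kill.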
00:31:53.331 [2024-07-16 00:15:08.526157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 
[2024-07-16 00:15:08.526491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 
00:15:08.526747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.526935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.526959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.526976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.331 [2024-07-16 00:15:08.527288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:53.331 [2024-07-16 00:15:08.527606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.331 [2024-07-16 00:15:08.527623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:53.332 [2024-07-16 00:15:08.527949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.332 [2024-07-16 00:15:08.527966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[... repeated nvme_qpair.c NOTICE pairs omitted: WRITE commands (sqid:1, lba 44344-44912, len:8) and READ commands (sqid:1, lba 43960-44080, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, 2024-07-16 00:15:08.528449 through 00:15:08.533150 ...]
[... repeated nvme_qpair.c NOTICE pairs omitted: WRITE commands (sqid:1, lba 30496-30824, len:8) and READ commands (sqid:1, lba 30280-30320, len:8) each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, 2024-07-16 00:15:24.972959 through 00:15:24.976033 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.335 [2024-07-16 00:15:24.976486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:53.335 [2024-07-16 00:15:24.976511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.976785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.976827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.976869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.976911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.976953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.976978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.976994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.977389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.977431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.977472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.977514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:53.336 [2024-07-16 00:15:24.977556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:53.336 [2024-07-16 00:15:24.977623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:53.336 [2024-07-16 00:15:24.977640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:53.336 Received shutdown signal, test time was about 34.164861 seconds 00:31:53.336 00:31:53.336 Latency(us) 00:31:53.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.336 Job: Nvme0n1 (Core Mask 0x4, workload: 
verify, depth: 128, IO size: 4096) 00:31:53.336 Verification LBA range: start 0x0 length 0x4000 00:31:53.336 Nvme0n1 : 34.16 7408.64 28.94 0.00 0.00 17244.43 1134.74 4026531.84 00:31:53.336 =================================================================================================================== 00:31:53.337 Total : 7408.64 28.94 0.00 0.00 17244.43 1134.74 4026531.84 00:31:53.337 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.595 rmmod nvme_tcp 00:31:53.595 rmmod nvme_fabrics 00:31:53.595 rmmod nvme_keyring 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:53.595 00:15:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1357569 ']' 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1357569 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1357569 ']' 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1357569 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:53.595 00:15:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1357569 00:31:53.595 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:53.595 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:53.595 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1357569' 00:31:53.595 killing process with pid 1357569 00:31:53.595 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1357569 00:31:53.595 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1357569 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.856 00:15:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:53.856 00:15:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.763 00:15:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.763 00:31:55.763 real 0m42.222s 00:31:55.763 user 2m10.879s 00:31:55.763 sys 0m9.823s 00:31:55.763 00:15:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:55.763 00:15:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:55.763 ************************************ 00:31:55.763 END TEST nvmf_host_multipath_status 00:31:55.763 ************************************ 00:31:55.763 00:15:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:55.763 00:15:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:55.763 00:15:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:55.763 00:15:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:55.763 ************************************ 00:31:55.764 START TEST nvmf_discovery_remove_ifc 00:31:55.764 ************************************ 00:31:55.764 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:56.022 * Looking for test storage... 
00:31:56.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.022 00:15:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.022 00:15:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:56.022 00:15:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:57.925 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:57.926 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:57.926 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:57.926 Found net devices under 0000:08:00.0: cvl_0_0 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:57.926 Found net devices under 0000:08:00.1: cvl_0_1 
00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.926 
00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.926 00:15:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:57.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:31:57.926 00:31:57.926 --- 10.0.0.2 ping statistics --- 00:31:57.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.926 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:31:57.926 00:31:57.926 --- 10.0.0.1 ping statistics --- 00:31:57.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.926 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1363187 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:57.926 
00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1363187 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1363187 ']' 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:57.926 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.926 [2024-07-16 00:15:32.144709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:57.926 [2024-07-16 00:15:32.144798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.926 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.926 [2024-07-16 00:15:32.208046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.926 [2024-07-16 00:15:32.293893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.926 [2024-07-16 00:15:32.293948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:57.926 [2024-07-16 00:15:32.293965] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.926 [2024-07-16 00:15:32.293979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.926 [2024-07-16 00:15:32.293991] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.927 [2024-07-16 00:15:32.294018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.927 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.927 [2024-07-16 00:15:32.432679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.185 [2024-07-16 00:15:32.440846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:58.185 null0 00:31:58.185 [2024-07-16 00:15:32.472827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1363270 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1363270 /tmp/host.sock 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1363270 ']' 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:58.185 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:58.185 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.185 [2024-07-16 00:15:32.546358] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:58.185 [2024-07-16 00:15:32.546462] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363270 ] 00:31:58.185 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.185 [2024-07-16 00:15:32.607449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.185 [2024-07-16 00:15:32.695064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.444 00:15:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 [2024-07-16 00:15:33.951957] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:59.816 [2024-07-16 00:15:33.952003] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:59.816 [2024-07-16 00:15:33.952030] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:59.816 [2024-07-16 00:15:34.080424] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:59.816 [2024-07-16 00:15:34.180927] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:59.816 [2024-07-16 00:15:34.181005] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:59.816 [2024-07-16 00:15:34.181048] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:59.816 [2024-07-16 00:15:34.181080] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:59.816 [2024-07-16 00:15:34.181120] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:59.816 [2024-07-16 00:15:34.229812] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2495530 was disconnected and freed. delete nvme_qpair. 
00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.816 00:15:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.187 00:15:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.187 00:15:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.117 00:15:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.098 
00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.098 00:15:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.027 00:15:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.402 00:15:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.402 00:15:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.402 [2024-07-16 00:15:39.622422] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:05.402 [2024-07-16 00:15:39.622492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.402 [2024-07-16 00:15:39.622524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.402 [2024-07-16 00:15:39.622551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.402 [2024-07-16 00:15:39.622575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.402 [2024-07-16 00:15:39.622600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.402 [2024-07-16 00:15:39.622624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.402 [2024-07-16 00:15:39.622648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.402 [2024-07-16 00:15:39.622672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.402 [2024-07-16 00:15:39.622697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.402 [2024-07-16 00:15:39.622721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.402 [2024-07-16 00:15:39.622745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c560 is same with the state(5) to be set 00:32:05.402 [2024-07-16 00:15:39.632422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c560 (9): Bad file descriptor 00:32:05.402 [2024-07-16 00:15:39.642476] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.336 00:15:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.336 [2024-07-16 00:15:40.646208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:06.336 [2024-07-16 00:15:40.646284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c560 with addr=10.0.0.2, port=4420 00:32:06.336 [2024-07-16 00:15:40.646321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c560 is same with the state(5) to be set 00:32:06.336 [2024-07-16 00:15:40.646386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c560 (9): Bad file descriptor 00:32:06.336 [2024-07-16 00:15:40.646945] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:06.336 [2024-07-16 00:15:40.647000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:06.336 [2024-07-16 00:15:40.647030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:06.336 [2024-07-16 00:15:40.647057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:06.336 [2024-07-16 00:15:40.647107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:06.336 [2024-07-16 00:15:40.647150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.336 00:15:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.271 [2024-07-16 00:15:41.649662] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:07.271 [2024-07-16 00:15:41.649693] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:07.271 [2024-07-16 00:15:41.649718] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:07.271 [2024-07-16 00:15:41.649742] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:07.271 [2024-07-16 00:15:41.649778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:07.271 [2024-07-16 00:15:41.649832] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:07.271 [2024-07-16 00:15:41.649881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.271 [2024-07-16 00:15:41.649913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.271 [2024-07-16 00:15:41.649944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.271 [2024-07-16 00:15:41.649971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.271 [2024-07-16 00:15:41.649997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.271 [2024-07-16 00:15:41.650020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.271 [2024-07-16 00:15:41.650045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.271 [2024-07-16 00:15:41.650072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.271 [2024-07-16 00:15:41.650099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:07.271 [2024-07-16 00:15:41.650123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:07.271 [2024-07-16 00:15:41.650158] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:32:07.271 [2024-07-16 00:15:41.650224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245b9f0 (9): Bad file descriptor 00:32:07.271 [2024-07-16 00:15:41.651214] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:07.271 [2024-07-16 00:15:41.651243] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.271 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:07.272 00:15:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:07.272 00:15:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.642 
00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:08.642 00:15:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.205 [2024-07-16 00:15:43.706282] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:09.205 [2024-07-16 00:15:43.706313] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:09.205 [2024-07-16 00:15:43.706338] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:09.462 [2024-07-16 00:15:43.792616] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.462 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.463 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.463 [2024-07-16 00:15:43.856183] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:09.463 [2024-07-16 00:15:43.856238] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:09.463 [2024-07-16 00:15:43.856275] 
bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:09.463 [2024-07-16 00:15:43.856302] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:09.463 [2024-07-16 00:15:43.856316] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:09.463 [2024-07-16 00:15:43.864478] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x246b9e0 was disconnected and freed. delete nvme_qpair. 00:32:09.463 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:09.463 00:15:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.394 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@90 -- # killprocess 1363270 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1363270 ']' 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1363270 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1363270 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1363270' 00:32:10.652 killing process with pid 1363270 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1363270 00:32:10.652 00:15:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1363270 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.652 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.652 rmmod nvme_tcp 
00:32:10.652 rmmod nvme_fabrics 00:32:10.652 rmmod nvme_keyring 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1363187 ']' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1363187 ']' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1363187' 00:32:10.911 killing process with pid 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1363187 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:10.911 
00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:10.911 00:15:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.445 00:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:13.445 00:32:13.445 real 0m17.130s 00:32:13.445 user 0m25.407s 00:32:13.445 sys 0m2.666s 00:32:13.445 00:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:13.445 00:15:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.445 ************************************ 00:32:13.445 END TEST nvmf_discovery_remove_ifc 00:32:13.445 ************************************ 00:32:13.445 00:15:47 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:13.445 00:15:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:13.445 00:15:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:13.445 00:15:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:13.445 ************************************ 00:32:13.445 START TEST nvmf_identify_kernel_target 00:32:13.445 ************************************ 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:13.445 * Looking for test storage... 00:32:13.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:13.445 00:15:47 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:13.445 00:15:47 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:13.445 00:15:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.826 00:15:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:14.826 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:14.826 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:32:14.826 Found net devices under 0000:08:00.0: cvl_0_0 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.826 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:14.827 Found net devices under 
0000:08:00.1: cvl_0_1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:14.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:14.827 00:32:14.827 --- 10.0.0.2 ping statistics --- 00:32:14.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.827 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
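
[editor's note] The `nvmf_tcp_init` steps traced above reduce to the following sequence. This is a sketch of what the trace shows, not a portable script: it requires root, and the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are specific to this run.

```shell
# Sketch of nvmf_tcp_init as traced above (requires root).
# cvl_0_0 / cvl_0_1 and the 10.0.0.x addresses are specific to this run.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # isolate the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # verify reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Moving one port of the two-port NIC into a namespace lets a single host act as both NVMe/TCP target and initiator over real hardware, which is why both pings must succeed before the test proceeds.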
00:32:14.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:32:14.827 00:32:14.827 --- 10.0.0.1 ping statistics --- 00:32:14.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.827 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.827 00:15:49 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:14.827 00:15:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.765 Waiting for block devices as requested 00:32:15.765 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:32:15.765 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:15.765 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:16.025 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:16.025 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:16.025 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:16.025 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:16.283 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:16.283 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:16.283 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:32:16.283 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:32:16.541 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:32:16.541 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:32:16.541 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:32:16.799 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:32:16.799 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:32:16.799 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:16.799 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.057 No valid GPT data, bailing 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:32:17.057 00:32:17.057 Discovery Log Number of Records 2, Generation counter 2 00:32:17.057 =====Discovery Log Entry 0====== 00:32:17.057 trtype: tcp 00:32:17.057 adrfam: ipv4 00:32:17.057 subtype: current discovery subsystem 00:32:17.057 treq: not specified, sq flow control disable supported 00:32:17.057 portid: 1 00:32:17.057 trsvcid: 4420 00:32:17.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:17.057 traddr: 10.0.0.1 00:32:17.057 eflags: none 00:32:17.057 sectype: none 00:32:17.057 =====Discovery Log Entry 1====== 00:32:17.057 trtype: tcp 00:32:17.057 adrfam: ipv4 00:32:17.057 subtype: nvme subsystem 00:32:17.057 treq: not specified, sq flow control disable supported 00:32:17.057 portid: 1 00:32:17.057 trsvcid: 4420 00:32:17.057 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:17.057 traddr: 10.0.0.1 00:32:17.057 eflags: none 00:32:17.057 sectype: none 00:32:17.057 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:17.057 trsvcid:4420 
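
[editor's note] The `configure_kernel_target` trace above shows only the values being echoed, not their configfs destinations. The sketch below fills in the attribute paths from the kernel nvmet configfs interface; those paths are an assumption about where each echoed value lands, and the NQN, block device, and address values are specific to this run. Requires root with `nvmet`/`nvmet_tcp` loaded.

```shell
# Sketch of configure_kernel_target as traced above (requires root).
# Configfs attribute paths are assumed from the kernel nvmet interface;
# the echoed values (nqn, /dev/nvme0n1, 10.0.0.1, tcp, 4420, ipv4) are
# taken verbatim from the trace.
nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
mkdir "$nvmet/subsystems/$nqn"
mkdir "$nvmet/subsystems/$nqn/namespaces/1"
mkdir "$nvmet/ports/1"
echo "SPDK-$nqn"    > "$nvmet/subsystems/$nqn/attr_model"           # assumed attr
echo 1              > "$nvmet/subsystems/$nqn/attr_allow_any_host"  # assumed attr
echo /dev/nvme0n1   > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1              > "$nvmet/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1       > "$nvmet/ports/1/addr_traddr"
echo tcp            > "$nvmet/ports/1/addr_trtype"
echo 4420           > "$nvmet/ports/1/addr_trsvcid"
echo ipv4           > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"  # expose subsystem on port
```

The final symlink is what publishes the subsystem on the port; only after it exists does `nvme discover -t tcp -a 10.0.0.1 -s 4420` return the two log entries shown in the trace.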
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:17.057 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.057 ===================================================== 00:32:17.057 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:17.057 ===================================================== 00:32:17.057 Controller Capabilities/Features 00:32:17.057 ================================ 00:32:17.057 Vendor ID: 0000 00:32:17.057 Subsystem Vendor ID: 0000 00:32:17.057 Serial Number: d6a5ef6e94afad320aac 00:32:17.057 Model Number: Linux 00:32:17.057 Firmware Version: 6.7.0-68 00:32:17.057 Recommended Arb Burst: 0 00:32:17.057 IEEE OUI Identifier: 00 00 00 00:32:17.057 Multi-path I/O 00:32:17.057 May have multiple subsystem ports: No 00:32:17.057 May have multiple controllers: No 00:32:17.057 Associated with SR-IOV VF: No 00:32:17.057 Max Data Transfer Size: Unlimited 00:32:17.057 Max Number of Namespaces: 0 00:32:17.057 Max Number of I/O Queues: 1024 00:32:17.057 NVMe Specification Version (VS): 1.3 00:32:17.057 NVMe Specification Version (Identify): 1.3 00:32:17.057 Maximum Queue Entries: 1024 00:32:17.057 Contiguous Queues Required: No 00:32:17.057 Arbitration Mechanisms Supported 00:32:17.057 Weighted Round Robin: Not Supported 00:32:17.057 Vendor Specific: Not Supported 00:32:17.057 Reset Timeout: 7500 ms 00:32:17.057 Doorbell Stride: 4 bytes 00:32:17.057 NVM Subsystem Reset: Not Supported 00:32:17.057 Command Sets Supported 00:32:17.057 NVM Command Set: Supported 00:32:17.057 Boot Partition: Not Supported 00:32:17.057 Memory Page Size Minimum: 4096 bytes 00:32:17.057 Memory Page Size Maximum: 4096 bytes 00:32:17.057 Persistent Memory Region: Not Supported 00:32:17.057 Optional Asynchronous Events Supported 00:32:17.057 Namespace Attribute Notices: Not Supported 00:32:17.057 Firmware Activation Notices: Not Supported 00:32:17.057 ANA Change Notices: Not Supported 00:32:17.057 PLE Aggregate Log Change Notices: Not Supported 
00:32:17.057 LBA Status Info Alert Notices: Not Supported 00:32:17.057 EGE Aggregate Log Change Notices: Not Supported 00:32:17.057 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.057 Zone Descriptor Change Notices: Not Supported 00:32:17.058 Discovery Log Change Notices: Supported 00:32:17.058 Controller Attributes 00:32:17.058 128-bit Host Identifier: Not Supported 00:32:17.058 Non-Operational Permissive Mode: Not Supported 00:32:17.058 NVM Sets: Not Supported 00:32:17.058 Read Recovery Levels: Not Supported 00:32:17.058 Endurance Groups: Not Supported 00:32:17.058 Predictable Latency Mode: Not Supported 00:32:17.058 Traffic Based Keep ALive: Not Supported 00:32:17.058 Namespace Granularity: Not Supported 00:32:17.058 SQ Associations: Not Supported 00:32:17.058 UUID List: Not Supported 00:32:17.058 Multi-Domain Subsystem: Not Supported 00:32:17.058 Fixed Capacity Management: Not Supported 00:32:17.058 Variable Capacity Management: Not Supported 00:32:17.058 Delete Endurance Group: Not Supported 00:32:17.058 Delete NVM Set: Not Supported 00:32:17.058 Extended LBA Formats Supported: Not Supported 00:32:17.058 Flexible Data Placement Supported: Not Supported 00:32:17.058 00:32:17.058 Controller Memory Buffer Support 00:32:17.058 ================================ 00:32:17.058 Supported: No 00:32:17.058 00:32:17.058 Persistent Memory Region Support 00:32:17.058 ================================ 00:32:17.058 Supported: No 00:32:17.058 00:32:17.058 Admin Command Set Attributes 00:32:17.058 ============================ 00:32:17.058 Security Send/Receive: Not Supported 00:32:17.058 Format NVM: Not Supported 00:32:17.058 Firmware Activate/Download: Not Supported 00:32:17.058 Namespace Management: Not Supported 00:32:17.058 Device Self-Test: Not Supported 00:32:17.058 Directives: Not Supported 00:32:17.058 NVMe-MI: Not Supported 00:32:17.058 Virtualization Management: Not Supported 00:32:17.058 Doorbell Buffer Config: Not Supported 00:32:17.058 Get LBA Status 
Capability: Not Supported 00:32:17.058 Command & Feature Lockdown Capability: Not Supported 00:32:17.058 Abort Command Limit: 1 00:32:17.058 Async Event Request Limit: 1 00:32:17.058 Number of Firmware Slots: N/A 00:32:17.058 Firmware Slot 1 Read-Only: N/A 00:32:17.058 Firmware Activation Without Reset: N/A 00:32:17.058 Multiple Update Detection Support: N/A 00:32:17.058 Firmware Update Granularity: No Information Provided 00:32:17.058 Per-Namespace SMART Log: No 00:32:17.058 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.058 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:17.058 Command Effects Log Page: Not Supported 00:32:17.058 Get Log Page Extended Data: Supported 00:32:17.058 Telemetry Log Pages: Not Supported 00:32:17.058 Persistent Event Log Pages: Not Supported 00:32:17.058 Supported Log Pages Log Page: May Support 00:32:17.058 Commands Supported & Effects Log Page: Not Supported 00:32:17.058 Feature Identifiers & Effects Log Page:May Support 00:32:17.058 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.058 Data Area 4 for Telemetry Log: Not Supported 00:32:17.058 Error Log Page Entries Supported: 1 00:32:17.058 Keep Alive: Not Supported 00:32:17.058 00:32:17.058 NVM Command Set Attributes 00:32:17.058 ========================== 00:32:17.058 Submission Queue Entry Size 00:32:17.058 Max: 1 00:32:17.058 Min: 1 00:32:17.058 Completion Queue Entry Size 00:32:17.058 Max: 1 00:32:17.058 Min: 1 00:32:17.058 Number of Namespaces: 0 00:32:17.058 Compare Command: Not Supported 00:32:17.058 Write Uncorrectable Command: Not Supported 00:32:17.058 Dataset Management Command: Not Supported 00:32:17.058 Write Zeroes Command: Not Supported 00:32:17.058 Set Features Save Field: Not Supported 00:32:17.058 Reservations: Not Supported 00:32:17.058 Timestamp: Not Supported 00:32:17.058 Copy: Not Supported 00:32:17.058 Volatile Write Cache: Not Present 00:32:17.058 Atomic Write Unit (Normal): 1 00:32:17.058 Atomic Write Unit (PFail): 1 
00:32:17.058 Atomic Compare & Write Unit: 1 00:32:17.058 Fused Compare & Write: Not Supported 00:32:17.058 Scatter-Gather List 00:32:17.058 SGL Command Set: Supported 00:32:17.058 SGL Keyed: Not Supported 00:32:17.058 SGL Bit Bucket Descriptor: Not Supported 00:32:17.058 SGL Metadata Pointer: Not Supported 00:32:17.058 Oversized SGL: Not Supported 00:32:17.058 SGL Metadata Address: Not Supported 00:32:17.058 SGL Offset: Supported 00:32:17.058 Transport SGL Data Block: Not Supported 00:32:17.058 Replay Protected Memory Block: Not Supported 00:32:17.058 00:32:17.058 Firmware Slot Information 00:32:17.058 ========================= 00:32:17.058 Active slot: 0 00:32:17.058 00:32:17.058 00:32:17.058 Error Log 00:32:17.058 ========= 00:32:17.058 00:32:17.058 Active Namespaces 00:32:17.058 ================= 00:32:17.058 Discovery Log Page 00:32:17.058 ================== 00:32:17.058 Generation Counter: 2 00:32:17.058 Number of Records: 2 00:32:17.058 Record Format: 0 00:32:17.058 00:32:17.058 Discovery Log Entry 0 00:32:17.058 ---------------------- 00:32:17.058 Transport Type: 3 (TCP) 00:32:17.058 Address Family: 1 (IPv4) 00:32:17.058 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:17.058 Entry Flags: 00:32:17.058 Duplicate Returned Information: 0 00:32:17.058 Explicit Persistent Connection Support for Discovery: 0 00:32:17.058 Transport Requirements: 00:32:17.058 Secure Channel: Not Specified 00:32:17.058 Port ID: 1 (0x0001) 00:32:17.058 Controller ID: 65535 (0xffff) 00:32:17.058 Admin Max SQ Size: 32 00:32:17.058 Transport Service Identifier: 4420 00:32:17.058 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:17.058 Transport Address: 10.0.0.1 00:32:17.058 Discovery Log Entry 1 00:32:17.058 ---------------------- 00:32:17.058 Transport Type: 3 (TCP) 00:32:17.058 Address Family: 1 (IPv4) 00:32:17.058 Subsystem Type: 2 (NVM Subsystem) 00:32:17.058 Entry Flags: 00:32:17.058 Duplicate Returned Information: 0 00:32:17.058 Explicit Persistent 
Connection Support for Discovery: 0 00:32:17.058 Transport Requirements: 00:32:17.058 Secure Channel: Not Specified 00:32:17.058 Port ID: 1 (0x0001) 00:32:17.058 Controller ID: 65535 (0xffff) 00:32:17.058 Admin Max SQ Size: 32 00:32:17.058 Transport Service Identifier: 4420 00:32:17.058 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:17.058 Transport Address: 10.0.0.1 00:32:17.058 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.317 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.317 get_feature(0x01) failed 00:32:17.317 get_feature(0x02) failed 00:32:17.317 get_feature(0x04) failed 00:32:17.317 ===================================================== 00:32:17.317 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.317 ===================================================== 00:32:17.317 Controller Capabilities/Features 00:32:17.317 ================================ 00:32:17.317 Vendor ID: 0000 00:32:17.317 Subsystem Vendor ID: 0000 00:32:17.317 Serial Number: 43558fc8615d8a4afbdf 00:32:17.317 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:17.317 Firmware Version: 6.7.0-68 00:32:17.317 Recommended Arb Burst: 6 00:32:17.317 IEEE OUI Identifier: 00 00 00 00:32:17.317 Multi-path I/O 00:32:17.317 May have multiple subsystem ports: Yes 00:32:17.317 May have multiple controllers: Yes 00:32:17.317 Associated with SR-IOV VF: No 00:32:17.317 Max Data Transfer Size: Unlimited 00:32:17.317 Max Number of Namespaces: 1024 00:32:17.317 Max Number of I/O Queues: 128 00:32:17.317 NVMe Specification Version (VS): 1.3 00:32:17.317 NVMe Specification Version (Identify): 1.3 00:32:17.317 Maximum Queue Entries: 1024 00:32:17.317 Contiguous Queues Required: No 00:32:17.318 Arbitration Mechanisms Supported 
00:32:17.318 Weighted Round Robin: Not Supported 00:32:17.318 Vendor Specific: Not Supported 00:32:17.318 Reset Timeout: 7500 ms 00:32:17.318 Doorbell Stride: 4 bytes 00:32:17.318 NVM Subsystem Reset: Not Supported 00:32:17.318 Command Sets Supported 00:32:17.318 NVM Command Set: Supported 00:32:17.318 Boot Partition: Not Supported 00:32:17.318 Memory Page Size Minimum: 4096 bytes 00:32:17.318 Memory Page Size Maximum: 4096 bytes 00:32:17.318 Persistent Memory Region: Not Supported 00:32:17.318 Optional Asynchronous Events Supported 00:32:17.318 Namespace Attribute Notices: Supported 00:32:17.318 Firmware Activation Notices: Not Supported 00:32:17.318 ANA Change Notices: Supported 00:32:17.318 PLE Aggregate Log Change Notices: Not Supported 00:32:17.318 LBA Status Info Alert Notices: Not Supported 00:32:17.318 EGE Aggregate Log Change Notices: Not Supported 00:32:17.318 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.318 Zone Descriptor Change Notices: Not Supported 00:32:17.318 Discovery Log Change Notices: Not Supported 00:32:17.318 Controller Attributes 00:32:17.318 128-bit Host Identifier: Supported 00:32:17.318 Non-Operational Permissive Mode: Not Supported 00:32:17.318 NVM Sets: Not Supported 00:32:17.318 Read Recovery Levels: Not Supported 00:32:17.318 Endurance Groups: Not Supported 00:32:17.318 Predictable Latency Mode: Not Supported 00:32:17.318 Traffic Based Keep ALive: Supported 00:32:17.318 Namespace Granularity: Not Supported 00:32:17.318 SQ Associations: Not Supported 00:32:17.318 UUID List: Not Supported 00:32:17.318 Multi-Domain Subsystem: Not Supported 00:32:17.318 Fixed Capacity Management: Not Supported 00:32:17.318 Variable Capacity Management: Not Supported 00:32:17.318 Delete Endurance Group: Not Supported 00:32:17.318 Delete NVM Set: Not Supported 00:32:17.318 Extended LBA Formats Supported: Not Supported 00:32:17.318 Flexible Data Placement Supported: Not Supported 00:32:17.318 00:32:17.318 Controller Memory Buffer Support 
00:32:17.318 ================================ 00:32:17.318 Supported: No 00:32:17.318 00:32:17.318 Persistent Memory Region Support 00:32:17.318 ================================ 00:32:17.318 Supported: No 00:32:17.318 00:32:17.318 Admin Command Set Attributes 00:32:17.318 ============================ 00:32:17.318 Security Send/Receive: Not Supported 00:32:17.318 Format NVM: Not Supported 00:32:17.318 Firmware Activate/Download: Not Supported 00:32:17.318 Namespace Management: Not Supported 00:32:17.318 Device Self-Test: Not Supported 00:32:17.318 Directives: Not Supported 00:32:17.318 NVMe-MI: Not Supported 00:32:17.318 Virtualization Management: Not Supported 00:32:17.318 Doorbell Buffer Config: Not Supported 00:32:17.318 Get LBA Status Capability: Not Supported 00:32:17.318 Command & Feature Lockdown Capability: Not Supported 00:32:17.318 Abort Command Limit: 4 00:32:17.318 Async Event Request Limit: 4 00:32:17.318 Number of Firmware Slots: N/A 00:32:17.318 Firmware Slot 1 Read-Only: N/A 00:32:17.318 Firmware Activation Without Reset: N/A 00:32:17.318 Multiple Update Detection Support: N/A 00:32:17.318 Firmware Update Granularity: No Information Provided 00:32:17.318 Per-Namespace SMART Log: Yes 00:32:17.318 Asymmetric Namespace Access Log Page: Supported 00:32:17.318 ANA Transition Time : 10 sec 00:32:17.318 00:32:17.318 Asymmetric Namespace Access Capabilities 00:32:17.318 ANA Optimized State : Supported 00:32:17.318 ANA Non-Optimized State : Supported 00:32:17.318 ANA Inaccessible State : Supported 00:32:17.318 ANA Persistent Loss State : Supported 00:32:17.318 ANA Change State : Supported 00:32:17.318 ANAGRPID is not changed : No 00:32:17.318 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:17.318 00:32:17.318 ANA Group Identifier Maximum : 128 00:32:17.318 Number of ANA Group Identifiers : 128 00:32:17.318 Max Number of Allowed Namespaces : 1024 00:32:17.318 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:17.318 Command Effects Log Page: Supported 
00:32:17.318 Get Log Page Extended Data: Supported 00:32:17.318 Telemetry Log Pages: Not Supported 00:32:17.318 Persistent Event Log Pages: Not Supported 00:32:17.318 Supported Log Pages Log Page: May Support 00:32:17.318 Commands Supported & Effects Log Page: Not Supported 00:32:17.318 Feature Identifiers & Effects Log Page:May Support 00:32:17.318 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.318 Data Area 4 for Telemetry Log: Not Supported 00:32:17.318 Error Log Page Entries Supported: 128 00:32:17.318 Keep Alive: Supported 00:32:17.318 Keep Alive Granularity: 1000 ms 00:32:17.318 00:32:17.318 NVM Command Set Attributes 00:32:17.318 ========================== 00:32:17.318 Submission Queue Entry Size 00:32:17.318 Max: 64 00:32:17.318 Min: 64 00:32:17.318 Completion Queue Entry Size 00:32:17.318 Max: 16 00:32:17.318 Min: 16 00:32:17.318 Number of Namespaces: 1024 00:32:17.318 Compare Command: Not Supported 00:32:17.318 Write Uncorrectable Command: Not Supported 00:32:17.318 Dataset Management Command: Supported 00:32:17.318 Write Zeroes Command: Supported 00:32:17.318 Set Features Save Field: Not Supported 00:32:17.318 Reservations: Not Supported 00:32:17.318 Timestamp: Not Supported 00:32:17.318 Copy: Not Supported 00:32:17.318 Volatile Write Cache: Present 00:32:17.318 Atomic Write Unit (Normal): 1 00:32:17.318 Atomic Write Unit (PFail): 1 00:32:17.318 Atomic Compare & Write Unit: 1 00:32:17.318 Fused Compare & Write: Not Supported 00:32:17.318 Scatter-Gather List 00:32:17.318 SGL Command Set: Supported 00:32:17.318 SGL Keyed: Not Supported 00:32:17.318 SGL Bit Bucket Descriptor: Not Supported 00:32:17.318 SGL Metadata Pointer: Not Supported 00:32:17.318 Oversized SGL: Not Supported 00:32:17.318 SGL Metadata Address: Not Supported 00:32:17.318 SGL Offset: Supported 00:32:17.318 Transport SGL Data Block: Not Supported 00:32:17.318 Replay Protected Memory Block: Not Supported 00:32:17.318 00:32:17.318 Firmware Slot Information 00:32:17.318 
========================= 00:32:17.318 Active slot: 0 00:32:17.318 00:32:17.318 Asymmetric Namespace Access 00:32:17.318 =========================== 00:32:17.318 Change Count : 0 00:32:17.318 Number of ANA Group Descriptors : 1 00:32:17.318 ANA Group Descriptor : 0 00:32:17.318 ANA Group ID : 1 00:32:17.318 Number of NSID Values : 1 00:32:17.318 Change Count : 0 00:32:17.318 ANA State : 1 00:32:17.318 Namespace Identifier : 1 00:32:17.318 00:32:17.318 Commands Supported and Effects 00:32:17.318 ============================== 00:32:17.318 Admin Commands 00:32:17.318 -------------- 00:32:17.318 Get Log Page (02h): Supported 00:32:17.318 Identify (06h): Supported 00:32:17.318 Abort (08h): Supported 00:32:17.318 Set Features (09h): Supported 00:32:17.318 Get Features (0Ah): Supported 00:32:17.318 Asynchronous Event Request (0Ch): Supported 00:32:17.318 Keep Alive (18h): Supported 00:32:17.318 I/O Commands 00:32:17.318 ------------ 00:32:17.318 Flush (00h): Supported 00:32:17.318 Write (01h): Supported LBA-Change 00:32:17.318 Read (02h): Supported 00:32:17.318 Write Zeroes (08h): Supported LBA-Change 00:32:17.318 Dataset Management (09h): Supported 00:32:17.318 00:32:17.318 Error Log 00:32:17.318 ========= 00:32:17.318 Entry: 0 00:32:17.318 Error Count: 0x3 00:32:17.318 Submission Queue Id: 0x0 00:32:17.318 Command Id: 0x5 00:32:17.318 Phase Bit: 0 00:32:17.318 Status Code: 0x2 00:32:17.318 Status Code Type: 0x0 00:32:17.318 Do Not Retry: 1 00:32:17.318 Error Location: 0x28 00:32:17.318 LBA: 0x0 00:32:17.318 Namespace: 0x0 00:32:17.318 Vendor Log Page: 0x0 00:32:17.318 ----------- 00:32:17.318 Entry: 1 00:32:17.318 Error Count: 0x2 00:32:17.318 Submission Queue Id: 0x0 00:32:17.318 Command Id: 0x5 00:32:17.318 Phase Bit: 0 00:32:17.318 Status Code: 0x2 00:32:17.318 Status Code Type: 0x0 00:32:17.318 Do Not Retry: 1 00:32:17.318 Error Location: 0x28 00:32:17.318 LBA: 0x0 00:32:17.318 Namespace: 0x0 00:32:17.318 Vendor Log Page: 0x0 00:32:17.318 ----------- 00:32:17.318 
Entry: 2 00:32:17.318 Error Count: 0x1 00:32:17.318 Submission Queue Id: 0x0 00:32:17.318 Command Id: 0x4 00:32:17.318 Phase Bit: 0 00:32:17.318 Status Code: 0x2 00:32:17.318 Status Code Type: 0x0 00:32:17.318 Do Not Retry: 1 00:32:17.318 Error Location: 0x28 00:32:17.318 LBA: 0x0 00:32:17.318 Namespace: 0x0 00:32:17.318 Vendor Log Page: 0x0 00:32:17.318 00:32:17.318 Number of Queues 00:32:17.318 ================ 00:32:17.318 Number of I/O Submission Queues: 128 00:32:17.318 Number of I/O Completion Queues: 128 00:32:17.318 00:32:17.318 ZNS Specific Controller Data 00:32:17.318 ============================ 00:32:17.318 Zone Append Size Limit: 0 00:32:17.318 00:32:17.318 00:32:17.319 Active Namespaces 00:32:17.319 ================= 00:32:17.319 get_feature(0x05) failed 00:32:17.319 Namespace ID:1 00:32:17.319 Command Set Identifier: NVM (00h) 00:32:17.319 Deallocate: Supported 00:32:17.319 Deallocated/Unwritten Error: Not Supported 00:32:17.319 Deallocated Read Value: Unknown 00:32:17.319 Deallocate in Write Zeroes: Not Supported 00:32:17.319 Deallocated Guard Field: 0xFFFF 00:32:17.319 Flush: Supported 00:32:17.319 Reservation: Not Supported 00:32:17.319 Namespace Sharing Capabilities: Multiple Controllers 00:32:17.319 Size (in LBAs): 1953525168 (931GiB) 00:32:17.319 Capacity (in LBAs): 1953525168 (931GiB) 00:32:17.319 Utilization (in LBAs): 1953525168 (931GiB) 00:32:17.319 UUID: 82be2550-3614-4405-9d6d-48626d1b851e 00:32:17.319 Thin Provisioning: Not Supported 00:32:17.319 Per-NS Atomic Units: Yes 00:32:17.319 Atomic Boundary Size (Normal): 0 00:32:17.319 Atomic Boundary Size (PFail): 0 00:32:17.319 Atomic Boundary Offset: 0 00:32:17.319 NGUID/EUI64 Never Reused: No 00:32:17.319 ANA group ID: 1 00:32:17.319 Namespace Write Protected: No 00:32:17.319 Number of LBA Formats: 1 00:32:17.319 Current LBA Format: LBA Format #00 00:32:17.319 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:17.319 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:17.319 rmmod nvme_tcp 00:32:17.319 rmmod nvme_fabrics 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.319 00:15:51 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.224 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:19.481 00:15:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.475 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:32:20.475 0000:00:04.0 (8086 3c20): ioatdma -> 
vfio-pci 00:32:20.475 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:32:20.475 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:32:21.410 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:32:21.410 00:32:21.410 real 0m8.381s 00:32:21.410 user 0m1.659s 00:32:21.410 sys 0m2.842s 00:32:21.410 00:15:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:21.410 00:15:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.410 ************************************ 00:32:21.410 END TEST nvmf_identify_kernel_target 00:32:21.410 ************************************ 00:32:21.410 00:15:55 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:21.410 00:15:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:21.410 00:15:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:21.410 00:15:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.410 ************************************ 00:32:21.410 START TEST nvmf_auth_host 00:32:21.410 ************************************ 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:21.410 * Looking for test storage... 
00:32:21.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.410 
00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:21.410 
00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:21.410 00:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:23.316 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.316 00:15:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:23.316 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:32:23.316 Found net devices under 0000:08:00.0: cvl_0_0 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:23.316 Found net devices under 0000:08:00.1: cvl_0_1 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:23.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:32:23.316 00:32:23.316 --- 10.0.0.2 ping statistics --- 00:32:23.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.316 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:23.316 00:32:23.316 --- 10.0.0.1 ping statistics --- 00:32:23.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.316 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.316 00:15:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1368594 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1368594 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1368594 ']' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:23.316 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:23.575 00:15:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40e90343c553c4addd4998a066c1ca81 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fHO 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40e90343c553c4addd4998a066c1ca81 0 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40e90343c553c4addd4998a066c1ca81 0 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40e90343c553c4addd4998a066c1ca81 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:23.575 00:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fHO 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fHO 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fHO 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ddd3cece35d107529088e82369de54fe79fc9e586f6d366c41eeed3cc2d6f87d 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lDW 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ddd3cece35d107529088e82369de54fe79fc9e586f6d366c41eeed3cc2d6f87d 3 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ddd3cece35d107529088e82369de54fe79fc9e586f6d366c41eeed3cc2d6f87d 3 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ddd3cece35d107529088e82369de54fe79fc9e586f6d366c41eeed3cc2d6f87d 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:23.575 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.834 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lDW 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lDW 00:32:23.835 00:15:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lDW 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33f4e57443a6a98fb9446ec4487be8607c16ee84e465ba5a 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4zO 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33f4e57443a6a98fb9446ec4487be8607c16ee84e465ba5a 0 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33f4e57443a6a98fb9446ec4487be8607c16ee84e465ba5a 0 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33f4e57443a6a98fb9446ec4487be8607c16ee84e465ba5a 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4zO 00:32:23.835 00:15:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4zO 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4zO 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de6e77c222d03460edded501a49a0bbf75a4d80a82167160 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CbC 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de6e77c222d03460edded501a49a0bbf75a4d80a82167160 2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de6e77c222d03460edded501a49a0bbf75a4d80a82167160 2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de6e77c222d03460edded501a49a0bbf75a4d80a82167160 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.835 00:15:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CbC 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CbC 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CbC 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4464cfcf07d6b126064fc30968cf7b2f 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.E0y 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4464cfcf07d6b126064fc30968cf7b2f 1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4464cfcf07d6b126064fc30968cf7b2f 1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4464cfcf07d6b126064fc30968cf7b2f 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.E0y 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.E0y 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.E0y 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b43e0a54220ab54d986bd2fe4bd67367 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HyR 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b43e0a54220ab54d986bd2fe4bd67367 1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b43e0a54220ab54d986bd2fe4bd67367 1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b43e0a54220ab54d986bd2fe4bd67367 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:23.835 
00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HyR 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HyR 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HyR 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=289de8ecd4377949d1f69cec4b3129176f5d7ebeb0b83f08 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8WH 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 289de8ecd4377949d1f69cec4b3129176f5d7ebeb0b83f08 2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 289de8ecd4377949d1f69cec4b3129176f5d7ebeb0b83f08 2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=289de8ecd4377949d1f69cec4b3129176f5d7ebeb0b83f08 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:23.835 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8WH 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8WH 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8WH 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6a86cb641b3b50d9e5ff007f821ff6b3 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6xa 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6a86cb641b3b50d9e5ff007f821ff6b3 0 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6a86cb641b3b50d9e5ff007f821ff6b3 0 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:24.094 00:15:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6a86cb641b3b50d9e5ff007f821ff6b3 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6xa 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6xa 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.6xa 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0ed96d3b469666af0b13d33fb1459f4c027c0f91f6b3358915a0b58f8c00d534 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Y8 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0ed96d3b469666af0b13d33fb1459f4c027c0f91f6b3358915a0b58f8c00d534 3 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0ed96d3b469666af0b13d33fb1459f4c027c0f91f6b3358915a0b58f8c00d534 3 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0ed96d3b469666af0b13d33fb1459f4c027c0f91f6b3358915a0b58f8c00d534 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Y8 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Y8 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2Y8 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1368594 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1368594 ']' 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
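The trace above runs SPDK's gen_dhchap_key helper six times: read len/2 random bytes as hex with xxd, write the key to a mktemp file, then wrap it in a DHHC-1:<digest-id>:<base64>: container (the format_key step done by the inline `python -`). The sketch below is a reconstruction, not SPDK's implementation; the digest-id map is taken from the `digests` array in the log, and the 4-byte little-endian CRC32 suffix appended before base64-encoding is an assumption based on the NVMe DH-HMAC-CHAP secret format.

```shell
#!/usr/bin/env bash
# Hedged sketch of the gen_dhchap_key flow traced above (names mirror the log,
# body is a reconstruction). python3 stands in for the log's inline `python -`.
gen_dhchap_key() {
    local digest=$1 len=$2              # e.g. "sha512 64" -> 64 hex chars
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len/2 bytes as hex
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # DHHC-1 container: prefix, two-digit digest id, base64(secret + CRC32), colon.
    # Assumption: the checksum is CRC32 of the secret, packed little-endian.
    python3 - "$key" "${digests[$digest]}" > "$file" <<'EOF'
import base64, binascii, struct, sys
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = base64.b64encode(key + struct.pack("<I", binascii.crc32(key))).decode()
print(f"DHHC-1:{digest:02x}:{blob}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}

# Cross-check against the log: the DHHC-1 payload used later for keyid=1
# base64-decodes to the 48-char hex secret generated at host/auth.sh@74
# (the last 4 decoded bytes, stripped by head, are the checksum):
printf '%s' 'MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==' |
    base64 -d | head -c 48; echo
# -> 33f4e57443a6a98fb9446ec4487be8607c16ee84e465ba5a
```

Unpacking any of the generated files the same way recovers the hex secret printed by xxd earlier in the trace, which is how the DHHC-1 strings passed to nvmet_auth_set_key below line up with the raw keys above.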
00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:24.094 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.352 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fHO 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lDW ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lDW 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4zO 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.CbC ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbC 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.E0y 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HyR ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HyR 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.8WH 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 
00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.6xa ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.6xa 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Y8 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]]
00:32:24.353 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet
00:32:24.610 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]]
00:32:24.610 00:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:32:25.544 Waiting for block devices as requested
00:32:25.544 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:32:25.544 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma
00:32:25.544 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma
00:32:25.544 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma
00:32:25.802 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma
00:32:25.802 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma
00:32:25.802 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma
00:32:26.060 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma
00:32:26.060 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma
00:32:26.060 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma
00:32:26.060 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma
00:32:26.318 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma
00:32:26.318 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma
00:32:26.318 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma
00:32:26.318 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma
00:32:26.318 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma
00:32:26.576 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme*
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]]
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host --
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:26.834 No valid GPT data, bailing 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:32:26.834 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420
00:32:27.092
00:32:27.092 Discovery Log Number of Records 2, Generation counter 2
00:32:27.093 =====Discovery Log Entry 0======
00:32:27.093 trtype: tcp
00:32:27.093 adrfam: ipv4
00:32:27.093 subtype: current discovery subsystem
00:32:27.093 treq: not specified, sq flow control disable supported
00:32:27.093 portid: 1
00:32:27.093 trsvcid: 4420
00:32:27.093 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:32:27.093 traddr: 10.0.0.1
00:32:27.093 eflags: none
00:32:27.093 sectype: none
00:32:27.093 =====Discovery Log Entry 1======
00:32:27.093 trtype: tcp
00:32:27.093 adrfam: ipv4
00:32:27.093 subtype: nvme subsystem
00:32:27.093 treq: not specified, sq flow control disable supported
00:32:27.093 portid: 1
00:32:27.093 trsvcid: 4420
00:32:27.093 subnqn: nqn.2024-02.io.spdk:cnode0
00:32:27.093 traddr: 10.0.0.1
00:32:27.093 eflags: none
00:32:27.093 sectype: none
00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:16:01
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.093 nvme0n1 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.093 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.351 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.352 nvme0n1 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.352 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.610 00:16:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.610 00:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.610 nvme0n1 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.610 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.869 00:16:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 nvme0n1 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.869 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.870 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.128 nvme0n1 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.128 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.387 nvme0n1 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.387 00:16:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:28.387 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.388 00:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.647 nvme0n1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.647 00:16:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.647 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.912 nvme0n1 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.912 00:16:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.912 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.175 nvme0n1 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.175 00:16:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.175 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.176 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.176 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.176 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.176 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.176 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.433 nvme0n1 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:29.433 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.434 00:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.692 nvme0n1 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.692 00:16:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.692 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.258 nvme0n1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.258 
00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.258 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.259 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.517 nvme0n1 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.517 00:16:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.517 00:16:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.517 00:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.775 nvme0n1 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.775 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.034 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 nvme0n1 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ 
-z 10.0.0.1 ]] 00:32:31.292 00:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.293 00:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.293 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.293 00:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.550 nvme0n1 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.550 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.807 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.373 nvme0n1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.373 
00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.373 00:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.939 nvme0n1 00:32:32.939 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.939 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.939 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.939 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.939 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.196 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.196 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.197 00:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 nvme0n1 
00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.763 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.328 nvme0n1 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.328 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.328 00:16:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:34.585 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.586 00:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.188 nvme0n1 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 
00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.188 00:16:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.188 00:16:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.561 nvme0n1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.561 00:16:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.495 nvme0n1 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:37.495 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.496 00:16:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.880 nvme0n1 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:38.880 00:16:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.880 00:16:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.880 00:16:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 nvme0n1 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.815 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.072 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.073 00:16:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.073 00:16:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.001 nvme0n1 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.001 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.259 
00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.259 nvme0n1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.259 
00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.259 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.554 nvme0n1 00:32:41.554 00:16:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.554 00:16:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.554 00:16:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.554 00:16:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 nvme0n1 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 
00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.812 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.812 00:16:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.070 nvme0n1 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.070 00:16:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.070 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.328 nvme0n1 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.328 00:16:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.328 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.585 nvme0n1 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:42.585 00:16:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.585 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.586 00:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.842 nvme0n1 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.843 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 nvme0n1 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.100 00:16:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.100 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.101 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.358 nvme0n1 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.358 00:16:17 
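The `get_main_ns_ip` trace above (nvmf/common.sh@741–755) resolves the test transport to an initiator address: it maps the transport name to the *name* of an environment variable (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`), then dereferences that variable, here yielding `10.0.0.1`. As a minimal illustrative sketch (a Python re-implementation for clarity, not the shell helper itself; the function name and error handling are assumptions):

```python
# Sketch of the transport -> IP resolution traced in get_main_ns_ip
# (nvmf/common.sh): pick an env-var *name* by transport, then resolve it.
def get_main_ns_ip(transport: str, env: dict) -> str:
    # Same candidate table as the trace: ip_candidates["rdma"]/["tcp"]
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    if not transport or transport not in ip_candidates:
        raise ValueError(f"unsupported transport: {transport!r}")
    var_name = ip_candidates[transport]      # e.g. NVMF_INITIATOR_IP
    ip = env.get(var_name, "")               # dereference, like ${!ip}
    if not ip:                               # mirrors the [[ -z ... ]] guards
        raise ValueError(f"{var_name} is empty")
    return ip

print(get_main_ns_ip("tcp", {"NVMF_INITIATOR_IP": "10.0.0.1"}))  # prints: 10.0.0.1
```

The resolved address is then passed as `-a` to `rpc_cmd bdev_nvme_attach_controller`, as seen in the surrounding records.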
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.358 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.359 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.617 nvme0n1 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.617 00:16:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:43.617 
00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:43.617 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.618 
00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.618 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.876 nvme0n1 00:32:43.876 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.876 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.876 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.876 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.876 00:16:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.876 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.134 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.135 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.392 nvme0n1 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.392 00:16:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.648 nvme0n1 00:32:44.648 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.648 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:44.648 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.648 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.648 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.904 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.904 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.904 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.904 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.905 00:16:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.905 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.163 nvme0n1 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.163 00:16:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:45.163 00:16:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.163 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.422 nvme0n1 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.422 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:45.680 00:16:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.680 00:16:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.680 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.681 00:16:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 nvme0n1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 
00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.247 00:16:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.812 nvme0n1 00:32:46.812 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.812 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.812 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.812 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.812 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.071 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.072 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.637 nvme0n1 00:32:47.637 00:16:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.637 00:16:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.637 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.638 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.203 nvme0n1 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.203 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.462 00:16:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.040 nvme0n1 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:49.040 00:16:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.040 00:16:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.411 nvme0n1 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.411 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.412 00:16:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.412 00:16:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.412 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.412 00:16:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.406 nvme0n1 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.406 00:16:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.780 nvme0n1 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.780 00:16:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.780 00:16:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.780 00:16:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.780 00:16:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.714 nvme0n1 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.714 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:53.972 
00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.972 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.973 00:16:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.973 00:16:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 nvme0n1 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:55.346 
00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 nvme0n1 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:55.346 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.347 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.605 nvme0n1 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.605 
00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.605 00:16:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:55.864 nvme0n1 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:55.864 
00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.864 00:16:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.864 nvme0n1 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.864 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.122 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.123 nvme0n1 00:32:56.123 00:16:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.123 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.381 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.381 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.381 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.381 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.381 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.382 00:16:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.382 nvme0n1 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.382 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.640 
00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.640 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.641 00:16:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.641 nvme0n1 00:32:56.641 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.641 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.641 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.641 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.641 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.899 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.900 00:16:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.900 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.158 nvme0n1 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.158 00:16:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.158 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.417 nvme0n1 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.417 00:16:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.417 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.676 nvme0n1 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.676 00:16:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:57.676 00:16:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.676 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.934 nvme0n1 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:57.934 00:16:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.934 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.935 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.502 nvme0n1 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.502 00:16:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.761 nvme0n1 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.761 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.762 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.327 nvme0n1 00:32:59.327 00:16:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.327 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.584 nvme0n1 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.584 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.585 
00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.585 00:16:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.585 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.585 00:16:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.150 nvme0n1 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.150 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.408 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.409 00:16:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.975 nvme0n1 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.975 00:16:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.975 00:16:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.541 nvme0n1 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.541 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.799 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.799 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.800 00:16:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.800 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.366 nvme0n1 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.366 00:16:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.366 00:16:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.932 nvme0n1 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.932 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDBlOTAzNDNjNTUzYzRhZGRkNDk5OGEwNjZjMWNhODGzo9iA: 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: ]] 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGRkM2NlY2UzNWQxMDc1MjkwODhlODIzNjlkZTU0ZmU3OWZjOWU1ODZmNmQzNjZjNDFlZWVkM2NjMmQ2Zjg3ZOgFsHU=: 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.190 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.191 00:16:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.122 nvme0n1 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.122 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.380 00:16:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.311 nvme0n1 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.311 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDQ2NGNmY2YwN2Q2YjEyNjA2NGZjMzA5NjhjZjdiMmZqMG6n: 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjQzZTBhNTQyMjBhYjU0ZDk4NmJkMmZlNGJkNjczNjfRRj3/: 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.569 00:16:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 nvme0n1 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 00:16:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Mjg5ZGU4ZWNkNDM3Nzk0OWQxZjY5Y2VjNGIzMTI5MTc2ZjVkN2ViZWIwYjgzZjA4u1yIsw==: 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmE4NmNiNjQxYjNiNTBkOWU1ZmYwMDdmODIxZmY2YjNsEt42: 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.518 00:16:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.518 00:16:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.903 nvme0n1 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.903 00:16:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGVkOTZkM2I0Njk2NjZhZjBiMTNkMzNmYjE0NTlmNGMwMjdjMGY5MWY2YjMzNTg5MTVhMGI1OGY4YzAwZDUzND2jxWE=: 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.903 00:16:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.903 00:16:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.837 nvme0n1 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.838 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzNmNGU1NzQ0M2E2YTk4ZmI5NDQ2ZWM0NDg3YmU4NjA3YzE2ZWU4NGU0NjViYTVhTpA8Ug==: 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGU2ZTc3YzIyMmQwMzQ2MGVkZGVkNTAxYTQ5YTBiYmY3NWE0ZDgwYTgyMTY3MTYwI+0F6g==: 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.096 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 request: 00:33:09.097 { 00:33:09.097 "name": "nvme0", 00:33:09.097 "trtype": "tcp", 00:33:09.097 "traddr": "10.0.0.1", 00:33:09.097 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.097 "adrfam": "ipv4", 00:33:09.097 "trsvcid": "4420", 00:33:09.097 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.097 "method": "bdev_nvme_attach_controller", 00:33:09.097 "req_id": 1 00:33:09.097 } 00:33:09.097 Got JSON-RPC error response 00:33:09.097 response: 00:33:09.097 { 00:33:09.097 "code": -5, 00:33:09.097 "message": "Input/output error" 00:33:09.097 } 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.097 00:16:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 request: 00:33:09.097 { 00:33:09.097 "name": "nvme0", 00:33:09.097 "trtype": "tcp", 00:33:09.097 "traddr": "10.0.0.1", 00:33:09.097 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.097 "adrfam": "ipv4", 00:33:09.097 "trsvcid": "4420", 00:33:09.097 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.097 "dhchap_key": "key2", 00:33:09.097 "method": "bdev_nvme_attach_controller", 00:33:09.097 "req_id": 1 00:33:09.097 } 00:33:09.097 Got JSON-RPC error response 00:33:09.097 response: 00:33:09.097 { 00:33:09.097 "code": -5, 00:33:09.097 "message": "Input/output error" 00:33:09.097 } 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.097 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.355 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.355 request: 00:33:09.355 { 00:33:09.356 "name": "nvme0", 00:33:09.356 "trtype": "tcp", 00:33:09.356 "traddr": "10.0.0.1", 00:33:09.356 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.356 "adrfam": "ipv4", 00:33:09.356 "trsvcid": "4420", 00:33:09.356 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.356 "dhchap_key": "key1", 00:33:09.356 "dhchap_ctrlr_key": "ckey2", 00:33:09.356 "method": "bdev_nvme_attach_controller", 00:33:09.356 "req_id": 1 00:33:09.356 } 00:33:09.356 Got JSON-RPC error response 00:33:09.356 response: 00:33:09.356 { 00:33:09.356 
"code": -5, 00:33:09.356 "message": "Input/output error" 00:33:09.356 } 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:09.356 rmmod nvme_tcp 00:33:09.356 rmmod nvme_fabrics 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1368594 ']' 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1368594 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1368594 ']' 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@950 -- # kill -0 1368594 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1368594 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1368594' 00:33:09.356 killing process with pid 1368594 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1368594 00:33:09.356 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1368594 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.616 00:16:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.517 00:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:11.518 00:16:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:11.518 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:11.518 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:11.518 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:11.518 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:11.518 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:11.776 00:16:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:12.713 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:12.713 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 
00:33:12.713 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:12.713 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:33:13.650 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:33:13.650 00:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fHO /tmp/spdk.key-null.4zO /tmp/spdk.key-sha256.E0y /tmp/spdk.key-sha384.8WH /tmp/spdk.key-sha512.2Y8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:13.650 00:16:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.582 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:33:14.582 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:14.582 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:33:14.582 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:33:14.582 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:33:14.582 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:33:14.582 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:33:14.582 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:33:14.582 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:33:14.582 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:33:14.582 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:33:14.582 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:33:14.582 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:33:14.582 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:33:14.582 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:33:14.582 0000:80:04.1 
(8086 3c21): Already using the vfio-pci driver 00:33:14.582 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:33:14.582 00:33:14.582 real 0m53.087s 00:33:14.582 user 0m50.095s 00:33:14.582 sys 0m5.107s 00:33:14.582 00:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:14.582 00:16:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.582 ************************************ 00:33:14.582 END TEST nvmf_auth_host 00:33:14.582 ************************************ 00:33:14.582 00:16:48 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:14.582 00:16:48 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.582 00:16:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:14.582 00:16:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:14.583 00:16:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:14.583 ************************************ 00:33:14.583 START TEST nvmf_digest 00:33:14.583 ************************************ 00:33:14.583 00:16:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:14.583 * Looking for test storage... 
00:33:14.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:14.583 00:16:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.485 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:16.486 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:16.486 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:16.486 Found net devices under 0000:08:00.0: cvl_0_0 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:16.486 Found net devices under 0000:08:00.1: cvl_0_1 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:33:16.486 00:33:16.486 --- 10.0.0.2 ping statistics --- 00:33:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.486 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:33:16.486 00:33:16.486 --- 10.0.0.1 ping statistics --- 00:33:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.486 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.486 ************************************ 00:33:16.486 START TEST nvmf_digest_clean 00:33:16.486 ************************************ 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:16.486 00:16:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1376289 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1376289 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1376289 ']' 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.486 00:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.486 [2024-07-16 00:16:50.914602] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:16.486 [2024-07-16 00:16:50.914701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.486 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.486 [2024-07-16 00:16:50.979801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.745 [2024-07-16 00:16:51.069016] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.745 [2024-07-16 00:16:51.069077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.745 [2024-07-16 00:16:51.069092] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.745 [2024-07-16 00:16:51.069105] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.745 [2024-07-16 00:16:51.069118] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:16.745 [2024-07-16 00:16:51.069157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.745 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.003 null0 00:33:17.003 [2024-07-16 00:16:51.289483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.003 [2024-07-16 00:16:51.313669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:17.003 
00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1376308 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1376308 /var/tmp/bperf.sock 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1376308 ']' 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:17.003 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.004 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:17.004 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:17.004 [2024-07-16 00:16:51.364746] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:17.004 [2024-07-16 00:16:51.364839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376308 ] 00:33:17.004 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.004 [2024-07-16 00:16:51.425427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.004 [2024-07-16 00:16:51.512710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.261 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:17.261 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:17.261 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:17.261 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:17.261 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:17.519 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:17.519 00:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.085 nvme0n1 00:33:18.085 00:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:18.085 00:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:18.085 Running I/O for 2 seconds... 00:33:19.983 00:33:19.984 Latency(us) 00:33:19.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.984 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:19.984 nvme0n1 : 2.00 17240.79 67.35 0.00 0.00 7414.17 4247.70 17087.91 00:33:19.984 =================================================================================================================== 00:33:19.984 Total : 17240.79 67.35 0.00 0.00 7414.17 4247.70 17087.91 00:33:19.984 0 00:33:19.984 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:19.984 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:19.984 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:19.984 | select(.opcode=="crc32c") 00:33:19.984 | "\(.module_name) \(.executed)"' 00:33:19.984 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:19.984 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1376308 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1376308 ']' 00:33:20.241 00:16:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1376308 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376308 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376308' 00:33:20.241 killing process with pid 1376308 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1376308 00:33:20.241 Received shutdown signal, test time was about 2.000000 seconds 00:33:20.241 00:33:20.241 Latency(us) 00:33:20.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.241 =================================================================================================================== 00:33:20.241 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:20.241 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1376308 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:20.498 00:16:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1376692 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1376692 /var/tmp/bperf.sock 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1376692 ']' 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:20.498 00:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:20.498 [2024-07-16 00:16:54.964360] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:20.498 [2024-07-16 00:16:54.964457] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376692 ] 00:33:20.498 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:20.498 Zero copy mechanism will not be used. 00:33:20.498 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.755 [2024-07-16 00:16:55.024767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.755 [2024-07-16 00:16:55.115584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.755 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:20.755 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:20.755 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:20.755 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:20.755 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:21.317 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.317 00:16:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.574 nvme0n1 00:33:21.574 00:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:21.574 00:16:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:21.830 Zero copy mechanism will not be used. 00:33:21.830 Running I/O for 2 seconds... 00:33:23.753 00:33:23.753 Latency(us) 00:33:23.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.753 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:23.753 nvme0n1 : 2.00 5764.57 720.57 0.00 0.00 2771.05 625.02 10145.94 00:33:23.753 =================================================================================================================== 00:33:23.753 Total : 5764.57 720.57 0.00 0.00 2771.05 625.02 10145.94 00:33:23.753 0 00:33:23.753 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:23.753 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:23.753 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:23.753 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:23.753 | select(.opcode=="crc32c") 00:33:23.753 | "\(.module_name) \(.executed)"' 00:33:23.753 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:24.012 00:16:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1376692 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1376692 ']' 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1376692 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376692 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:24.012 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:24.013 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376692' 00:33:24.013 killing process with pid 1376692 00:33:24.013 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1376692 00:33:24.013 Received shutdown signal, test time was about 2.000000 seconds 00:33:24.013 00:33:24.013 Latency(us) 00:33:24.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.013 =================================================================================================================== 00:33:24.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.013 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1376692 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:24.273 00:16:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1376987 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1376987 /var/tmp/bperf.sock 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1376987 ']' 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:24.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.273 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:24.273 [2024-07-16 00:16:58.687393] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:24.273 [2024-07-16 00:16:58.687500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376987 ] 00:33:24.273 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.273 [2024-07-16 00:16:58.747708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.532 [2024-07-16 00:16:58.838462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.532 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:24.532 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:24.532 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:24.532 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:24.532 00:16:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:25.102 00:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.102 00:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:25.360 nvme0n1 00:33:25.360 00:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:25.360 00:16:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:33:25.360 Running I/O for 2 seconds... 00:33:27.286 00:33:27.286 Latency(us) 00:33:27.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.286 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.286 nvme0n1 : 2.01 17600.89 68.75 0.00 0.00 7253.87 3495.25 17379.18 00:33:27.286 =================================================================================================================== 00:33:27.286 Total : 17600.89 68.75 0.00 0.00 7253.87 3495.25 17379.18 00:33:27.286 0 00:33:27.544 00:17:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:27.544 00:17:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:27.544 00:17:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:27.544 00:17:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:27.544 00:17:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:27.544 | select(.opcode=="crc32c") 00:33:27.544 | "\(.module_name) \(.executed)"' 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1376987 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1376987 ']' 00:33:27.803 00:17:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1376987 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376987 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376987' 00:33:27.803 killing process with pid 1376987 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1376987 00:33:27.803 Received shutdown signal, test time was about 2.000000 seconds 00:33:27.803 00:33:27.803 Latency(us) 00:33:27.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.803 =================================================================================================================== 00:33:27.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1376987 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:27.803 00:17:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1377286 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1377286 /var/tmp/bperf.sock 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1377286 ']' 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:27.803 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.061 [2024-07-16 00:17:02.322408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:28.062 [2024-07-16 00:17:02.322489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377286 ] 00:33:28.062 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:28.062 Zero copy mechanism will not be used. 00:33:28.062 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.062 [2024-07-16 00:17:02.375871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.062 [2024-07-16 00:17:02.467231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.062 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:28.062 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:28.062 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:28.062 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:28.062 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:28.629 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.629 00:17:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.196 nvme0n1 00:33:29.196 00:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:29.196 00:17:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:29.196 Zero copy mechanism will not be used. 00:33:29.196 Running I/O for 2 seconds... 00:33:31.102 00:33:31.102 Latency(us) 00:33:31.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:31.102 nvme0n1 : 2.00 6102.64 762.83 0.00 0.00 2613.67 1905.40 11262.48 00:33:31.102 =================================================================================================================== 00:33:31.102 Total : 6102.64 762.83 0.00 0.00 2613.67 1905.40 11262.48 00:33:31.102 0 00:33:31.102 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:31.102 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:31.102 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:31.102 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:31.102 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:31.102 | select(.opcode=="crc32c") 00:33:31.102 | "\(.module_name) \(.executed)"' 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:31.672 00:17:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1377286 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1377286 ']' 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1377286 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1377286 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1377286' 00:33:31.672 killing process with pid 1377286 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1377286 00:33:31.672 Received shutdown signal, test time was about 2.000000 seconds 00:33:31.672 00:33:31.672 Latency(us) 00:33:31.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.672 =================================================================================================================== 00:33:31.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.672 00:17:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1377286 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1376289 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1376289 ']' 00:33:31.672 00:17:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1376289 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376289 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376289' 00:33:31.672 killing process with pid 1376289 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1376289 00:33:31.672 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1376289 00:33:31.931 00:33:31.931 real 0m15.425s 00:33:31.931 user 0m31.204s 00:33:31.931 sys 0m4.254s 00:33:31.931 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:31.931 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:31.931 ************************************ 00:33:31.931 END TEST nvmf_digest_clean 00:33:31.932 ************************************ 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 
************************************ 00:33:31.932 START TEST nvmf_digest_error 00:33:31.932 ************************************ 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1377691 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1377691 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1377691 ']' 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:31.932 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.932 [2024-07-16 00:17:06.375361] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:31.932 [2024-07-16 00:17:06.375451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.932 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.932 [2024-07-16 00:17:06.439038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.191 [2024-07-16 00:17:06.525845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.191 [2024-07-16 00:17:06.525904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.191 [2024-07-16 00:17:06.525919] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.191 [2024-07-16 00:17:06.525933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.191 [2024-07-16 00:17:06.525945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:32.191 [2024-07-16 00:17:06.525973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.191 [2024-07-16 00:17:06.654739] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.191 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.449 null0 00:33:32.449 [2024-07-16 00:17:06.758428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.449 
[2024-07-16 00:17:06.782637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1377756 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1377756 /var/tmp/bperf.sock 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1377756 ']' 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.449 00:17:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.449 [2024-07-16 00:17:06.833124] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:32.449 [2024-07-16 00:17:06.833236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377756 ] 00:33:32.449 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.449 [2024-07-16 00:17:06.892988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.707 [2024-07-16 00:17:06.981360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.707 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.707 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:32.707 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.707 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:32.966 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:32.966 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.966 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:32.966 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.966 00:17:07 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:32.966 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.226 nvme0n1 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:33.226 00:17:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.486 Running I/O for 2 seconds... 
00:33:33.486 [2024-07-16 00:17:07.862478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.486 [2024-07-16 00:17:07.862530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.486 [2024-07-16 00:17:07.862552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.486 [2024-07-16 00:17:07.878803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.486 [2024-07-16 00:17:07.878840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.486 [2024-07-16 00:17:07.878859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.894219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.894261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.894280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.910942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.910975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.910994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.925468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.925501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.925526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.939640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.939673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.939692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.952833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.952866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.952885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.970898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.970930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.970950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.986176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.986208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.986227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.487 [2024-07-16 00:17:07.998403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.487 [2024-07-16 00:17:07.998435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.487 [2024-07-16 00:17:07.998461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.013713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.013745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.747 [2024-07-16 00:17:08.013764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.027905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.027937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:33.747 [2024-07-16 00:17:08.027964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.042667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.042699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.747 [2024-07-16 00:17:08.042718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.056756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.056798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.747 [2024-07-16 00:17:08.056816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.071349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.071381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.747 [2024-07-16 00:17:08.071400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.747 [2024-07-16 00:17:08.085167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.747 [2024-07-16 00:17:08.085198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:120 nsid:1 lba:1473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.085217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.099098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.099144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.099164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.113211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.113242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.113261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.127298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.127336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.127356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.144077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.144108] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.144127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.158022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.158054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.158073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.173973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.174005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.174024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.187006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.187050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.187069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.202328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.202360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.202381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.216476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.216508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.216526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.230968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.231019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.246260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.246291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.246310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.748 [2024-07-16 00:17:08.260874] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:33.748 [2024-07-16 00:17:08.260906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.748 [2024-07-16 00:17:08.260925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.008 [2024-07-16 00:17:08.275061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.008 [2024-07-16 00:17:08.275094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.008 [2024-07-16 00:17:08.275112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.008 [2024-07-16 00:17:08.287723] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.287755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.304588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.304622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.304641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.318211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.318242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.318260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.334624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.334656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.334675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.348020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.348052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.348071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.364933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.364965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.364983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.378156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.378188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.378214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.395694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.395730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.395748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.410452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.410484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.410502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.423581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.423612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.423630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.437929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.437965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.437983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.452658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.452689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.452708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.468359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.468391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.468409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.480804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.480835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:336 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.480853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.495828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.495859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.495877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.009 [2024-07-16 00:17:08.512321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.009 [2024-07-16 00:17:08.512352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.009 [2024-07-16 00:17:08.512371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.524833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.524865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.524884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.541078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.541111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.541129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.555227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.555258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.555276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.570436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.570467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.570486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.583371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.583402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.583420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.600741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 
00:17:08.600773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.600792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.616933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.616964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.616982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.630065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.630096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.630126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.644222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.644253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.644271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.658371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.658402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.658421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.672557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.672588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.672606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.687041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.687076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.687094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.701302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.701336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.701355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.715670] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.715704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.715723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.729888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.729921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.729940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.744199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.744231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.744250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:34.269 [2024-07-16 00:17:08.758530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:34.269 [2024-07-16 00:17:08.758572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.269 [2024-07-16 00:17:08.758592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:33:34.269 [2024-07-16 00:17:08.772767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590)
00:33:34.269 [2024-07-16 00:17:08.772809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:34.269 [2024-07-16 00:17:08.772828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 70 further identical data digest error / COMMAND TRANSIENT TRANSPORT ERROR record triples on tqpair 0x1326590 omitted; only the timestamp, cid, and lba differ (00:17:08.787118 through 00:17:09.814685) ...]
00:33:35.564 [2024-07-16 00:17:09.829978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1326590) 00:33:35.564 [2024-07-16 00:17:09.830013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.564 [2024-07-16 00:17:09.830031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.564 [2024-07-16 00:17:09.842485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1326590) 00:33:35.564 [2024-07-16 00:17:09.842516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:35.564 [2024-07-16 00:17:09.842535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:35.564 00:33:35.564 Latency(us) 00:33:35.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:35.564 nvme0n1 : 2.01 17432.70 68.10 0.00 0.00 7329.40 4150.61 20388.98 00:33:35.564 =================================================================================================================== 00:33:35.564 Total : 17432.70 68.10 0.00 0.00 7329.40 4150.61 20388.98 00:33:35.564 0 00:33:35.564 00:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:35.564 00:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:35.564 00:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:35.564 00:17:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:35.564 | .driver_specific 00:33:35.564 | 
.nvme_error 00:33:35.564 | .status_code 00:33:35.564 | .command_transient_transport_error' 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1377756 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1377756 ']' 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1377756 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1377756 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1377756' 00:33:35.821 killing process with pid 1377756 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1377756 00:33:35.821 Received shutdown signal, test time was about 2.000000 seconds 00:33:35.821 00:33:35.821 Latency(us) 00:33:35.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:35.821 =================================================================================================================== 00:33:35.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:35.821 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1377756 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1378096 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1378096 /var/tmp/bperf.sock 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1378096 ']' 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:36.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.080 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.080 [2024-07-16 00:17:10.395234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:36.080 [2024-07-16 00:17:10.395331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378096 ] 00:33:36.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.080 Zero copy mechanism will not be used. 00:33:36.080 EAL: No free 2048 kB hugepages reported on node 1 00:33:36.080 [2024-07-16 00:17:10.455546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.080 [2024-07-16 00:17:10.546428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.339 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:36.339 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:36.339 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.339 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:36.597 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:36.597 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.597 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.597 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.597 00:17:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.597 00:17:10 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:36.855 nvme0n1 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:36.855 00:17:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.115 Zero copy mechanism will not be used. 00:33:37.115 Running I/O for 2 seconds... 
00:33:37.115 [2024-07-16 00:17:11.442904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.442963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.442986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.448998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.449033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.449052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.455215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.455256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.455285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.461226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.461265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.461286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.466956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.466989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.472067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.472100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.472119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.479562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.479595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.479614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.487324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.487358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.487377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.495104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.495147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.495168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.502801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.502835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.502854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.510530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.510564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.510583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.518315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.518356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.115 [2024-07-16 00:17:11.518375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.526053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.526087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.526105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.533729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.533762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.533780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.541416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.115 [2024-07-16 00:17:11.541450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.115 [2024-07-16 00:17:11.541469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.115 [2024-07-16 00:17:11.549168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.549201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.549220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.556935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.556968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.556987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.564642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.564674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.564693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.572370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.572402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.572421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.580097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.580130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.580159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.587811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.587843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.594541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.594573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.594592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.600156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.600188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.600207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.606247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 
00:33:37.116 [2024-07-16 00:17:11.606279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.606297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.611348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.611380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.611399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.616455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.616486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.616505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.621519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.621550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.621568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.116 [2024-07-16 00:17:11.626682] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.116 [2024-07-16 00:17:11.626715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.116 [2024-07-16 00:17:11.626733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.631829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.631861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.631886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.636997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.637030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.637047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.642123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.642162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.642181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.647307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.647339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.647357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.652384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.652415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.652433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.658489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.658521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.658539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.663734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.663765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.375 [2024-07-16 00:17:11.663784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.375 [2024-07-16 00:17:11.668887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.375 [2024-07-16 00:17:11.668919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.668937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.673963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.673996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.674014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.679091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.679122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.679147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.684284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.684315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.684333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.689370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.689401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.689419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.694510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.694542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.694560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.699694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.699725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.699743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.705509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.705542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.705560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.711322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.711355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.711375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.716529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.716562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.716581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.722438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.722472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.722497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.728665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.728698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.728716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.733836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.733869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.733888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.738941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.738972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.738990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.744086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.744119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.744143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.749242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.749273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.749292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.754369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.754400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.754418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.759568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.759599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.759617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.764686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.764718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.764736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.769936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.769972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.769990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.775552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.775585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.775604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.781322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.781355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.781375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.787728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.787761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.787782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.793333] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.793366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.793386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.799315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.799349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.799367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.806337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.806372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.806391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.812275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.812309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.812330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.818117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.818158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.822361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.822393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.822412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.376 [2024-07-16 00:17:11.827426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.376 [2024-07-16 00:17:11.827459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.376 [2024-07-16 00:17:11.827477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.834660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.834694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.834713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.840412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.840446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.840465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.846039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.846071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.846090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.851181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.851213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.851232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.856182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.856213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.856231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.861278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.861310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.861328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.866292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.866323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.866348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.871364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.871395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.871413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.876315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.876346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.876364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.881270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.881301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.881319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.377 [2024-07-16 00:17:11.886316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.377 [2024-07-16 00:17:11.886347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.377 [2024-07-16 00:17:11.886365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.891399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.891431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.891449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.896482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.896514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.896531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.901554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.901585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.901603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.906597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.906630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.906649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.911632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.911669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.911688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.916696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.916727] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.916746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.921661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.921691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.921709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.926705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.926736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.926754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.931688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.931720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.931737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.936691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.936721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.936740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.941692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.941722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.941740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.946848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.946880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.946898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.637 [2024-07-16 00:17:11.951874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.637 [2024-07-16 00:17:11.951905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.637 [2024-07-16 00:17:11.951923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.956861] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.956891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.956909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.961903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.961934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.961952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.966933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.966964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.966981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.972004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.972037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.972055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.977020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.977050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.977068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.982129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.982166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.982185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.987221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.987251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.987269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.992262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.992295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.992313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:11.997279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:11.997310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:11.997335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.002300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.002331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.002349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.007318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.007350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.007368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.012335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.012366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.012384] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.017390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.017420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.017439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.022389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.022421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.022438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.027424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.027454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.027473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.032522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.032554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.032573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.037518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.037555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.037573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.042481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.042515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.042533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.047455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.047487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.047506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.052507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.052539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.052557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.057517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.057549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.057567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.062505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.062535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.062554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.067526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.067558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.067577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.072563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.072595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.072613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.077573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.077604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.077622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.082564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.082595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.082619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.087564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.087596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.087614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.092520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.092552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.092570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.638 [2024-07-16 00:17:12.097535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.638 [2024-07-16 00:17:12.097565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.638 [2024-07-16 00:17:12.097583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.102475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.102505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.102524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.107487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.107518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.107536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.112514] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.112545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.112563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.117493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.117523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.117542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.122639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.122670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.126866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.126905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.126924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.129962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.129994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.130012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.133737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.133768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.133787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.137440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.137490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.140185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.140216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.140234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.144173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.144204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.144223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.639 [2024-07-16 00:17:12.149184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.639 [2024-07-16 00:17:12.149217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.639 [2024-07-16 00:17:12.149235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.153839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.153882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.153902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.158923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.158958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.158977] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.162324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.162355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.162373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.166380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.166413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.166431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.170753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.170786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.170804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.174357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.174388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.174406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.178097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.178129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.178155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.182786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.182818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.182836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.187828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.187859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.187877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.192908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.192939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.192966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.198030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.198060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.198086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.940 [2024-07-16 00:17:12.203150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.940 [2024-07-16 00:17:12.203181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.940 [2024-07-16 00:17:12.203199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.208191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.208222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.208241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.213280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.213311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.213329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.218219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.218251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.218269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.223300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.223331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.223349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.228423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.228455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.228475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.233467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.233499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.233517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.238483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.238513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.238531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.243591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.243629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.243649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.248758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.248790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.248808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.253980] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.254013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.254031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.259103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.259135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.259163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.264294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.264325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.264343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.269397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.269428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.269447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.274561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.274593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.274611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.279684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.279715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.279734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.284870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.284902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.284920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.290547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.290581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.290599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.296360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.296393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.296412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.303384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.303418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.303438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.310447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.310482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.310501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.314209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.314242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.314260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.320706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.320739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.320759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.325805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.325839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.325857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.331673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.331707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.331726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.337464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.337498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.337524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.343590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.343623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.343643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.349594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.349627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.349647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.355163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.355206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.355225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.360852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.360886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.360905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.366816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.366849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.366869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.372314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.372347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.372367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.377939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.377972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.377993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.383148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.383179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.383198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.941 [2024-07-16 00:17:12.388530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.941 [2024-07-16 00:17:12.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.941 [2024-07-16 00:17:12.388582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.394284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.394319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.394338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.398570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.398603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.398621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.401887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.401920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.401938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.406690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.406724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.406742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.411582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.411615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.411632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.416563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.416594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.416612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.421662] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.421694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.421712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.426698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.426729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.426753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.431673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.431704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.431723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.436578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.436609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.436627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.441484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.441517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.441535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.942 [2024-07-16 00:17:12.446604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:37.942 [2024-07-16 00:17:12.446638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.942 [2024-07-16 00:17:12.446656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.451844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.451879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.219 [2024-07-16 00:17:12.451897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.456731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.456764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.219 [2024-07-16 00:17:12.456782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.461971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.462004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.219 [2024-07-16 00:17:12.462022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.466970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.467001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.219 [2024-07-16 00:17:12.467020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.471826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.471868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.219 [2024-07-16 00:17:12.471887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.219 [2024-07-16 00:17:12.476851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.219 [2024-07-16 00:17:12.476887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 
00:17:12.476905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.481893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.481943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.486861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.486893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.486911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.491908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.491939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.491956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.496928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.496959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.496977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.501927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.501958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.501977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.506929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.506960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.506978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.511958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.511989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.512007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.516954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.516984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.517002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.521962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.521993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.522011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.526990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.527021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.527039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.532061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.532092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.532110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.537067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 
00:17:12.537098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.537116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.542019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.542051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.542070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.547507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.547540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.547559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.553373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.553405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.553424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.559514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.559547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.559573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.564973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.565005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.565024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.570094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.570126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.570153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.575207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.575239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.575257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.580263] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.580294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.580312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.585344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.585374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.585392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.590414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.590445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.590464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.595442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.595473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.595491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.600448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.600479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.600497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.605511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.605551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.605570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.610536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.610567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.610585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.615639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.615671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.615689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.620651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.620682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.620700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.220 [2024-07-16 00:17:12.625680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.220 [2024-07-16 00:17:12.625710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.220 [2024-07-16 00:17:12.625728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.630629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.630660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.630678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.635598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.635630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.635648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.640496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.640527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.640544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.645451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.645482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.645500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.650364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.650395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.655329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.655359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.660279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.660310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.660329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.665302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.665334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.665352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.670342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.670374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.670393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.675492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.675524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.675542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.680625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.680656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.680674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.685767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.685798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.685817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.690919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.690952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.690977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.695966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.695997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.696015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.701053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.701084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.701102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.706384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.706417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.706435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.712031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.712063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.712082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.717706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.717739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.717758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.723127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.723168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.723188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.221 [2024-07-16 00:17:12.728914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.221 [2024-07-16 00:17:12.728947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.221 [2024-07-16 00:17:12.728966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.734880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.734915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.734933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.740447] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.740481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.740500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.746185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.746219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.746238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.751253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.751285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.751303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.756644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.756677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.756695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.762037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.762070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.762088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.765178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.765210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.765228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.770691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.770723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.770742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.776402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.776453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.781982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.782014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.782039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.786965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.786997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.792076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.792108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.792126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.796831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.796862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.796880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.801796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.801826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.801845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.806720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.806751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.806769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.811656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.811687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.811705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.816744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.816775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.816793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.821862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.821894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.821912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.826885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.826923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.826941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.831833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.831864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.831883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.836825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.836855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.836873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.841788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.841819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.841837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.846867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.846898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.481 [2024-07-16 00:17:12.846916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.481 [2024-07-16 00:17:12.851819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.481 [2024-07-16 00:17:12.851851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.851869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.856765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.856795] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.856813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.861745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.861776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.861794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.866704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.866734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.866752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.871765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.871796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.871814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.876821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.876851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.876869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.881965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.881995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.882013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.886977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.887008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.887026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.892118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.892176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.897131] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.897170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.897188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.902684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.902717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.902736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.908197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.908241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.908260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.913388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.913421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.913446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.918396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.918428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.918446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.923446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.923477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.923496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.928477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.928507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.928525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.933479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.933510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.933528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.938573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.938605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.938624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.943694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.943725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.943744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.948778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.948809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.954612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.954645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.954664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.960234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.960272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.960292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.965316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.965348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.965367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.970538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.970570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.970588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.975632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.975666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:38.482 [2024-07-16 00:17:12.975684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.980854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.980885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.980903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.482 [2024-07-16 00:17:12.985885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.482 [2024-07-16 00:17:12.985915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.482 [2024-07-16 00:17:12.985933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.483 [2024-07-16 00:17:12.991064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.483 [2024-07-16 00:17:12.991097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.483 [2024-07-16 00:17:12.991114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:12.996279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:12.996312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:12.996330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.001332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.001364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.001383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.006327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.006361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.006380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.011275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.011307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.011325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.016247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.016278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.016297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.021232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.021263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.021281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.026111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.026151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.026171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.031054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.031085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.031103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.036078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 
00:33:38.742 [2024-07-16 00:17:13.036109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.036127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.040998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.041030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.041048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.046084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.046115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.046147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.051063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.051094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.051112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.056125] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.056164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.056183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.061086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.061119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.061148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.066047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.066078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.066096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.071052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.071082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.071100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.076037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.076067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.076085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.081102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.081133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.081160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.742 [2024-07-16 00:17:13.086146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.742 [2024-07-16 00:17:13.086177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.742 [2024-07-16 00:17:13.086196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.091247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.091278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.091297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.096312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.096344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.096361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.101329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.101360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.101379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.106562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.106594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.106613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.111753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.111784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.111802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.117057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.117089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.117107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.122714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.122747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.122766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.128591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.128624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.128644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.134435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.134468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.134498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.141439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.141474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.141493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.149847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.149881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.149900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.157306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.157340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.157359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.164034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.164068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.164088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.171021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.171056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.171075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.176533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.176567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.176586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.182307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.182341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.182360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.188279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.188312] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.188331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.194131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.194177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.194197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.200254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.200287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.200306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.206232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.206265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.206284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.212068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.212100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.212119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.218044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.218077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.218096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.223751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.223783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.223802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.228842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.228875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.228894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.232426] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.232458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.232476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.237956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.237988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.238007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.243599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.243631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.743 [2024-07-16 00:17:13.243650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.743 [2024-07-16 00:17:13.249258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.743 [2024-07-16 00:17:13.249291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.744 [2024-07-16 00:17:13.249310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:33:38.744 [2024-07-16 00:17:13.254450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:38.744 [2024-07-16 00:17:13.254484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.744 [2024-07-16 00:17:13.254503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.001 [2024-07-16 00:17:13.259488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.001 [2024-07-16 00:17:13.259520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.001 [2024-07-16 00:17:13.259539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.001 [2024-07-16 00:17:13.264492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.001 [2024-07-16 00:17:13.264524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.001 [2024-07-16 00:17:13.264542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.001 [2024-07-16 00:17:13.269479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.001 [2024-07-16 00:17:13.269511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.001 [2024-07-16 00:17:13.269529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.001 [2024-07-16 00:17:13.274614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.001 [2024-07-16 00:17:13.274645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.001 [2024-07-16 00:17:13.274662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.001 [2024-07-16 00:17:13.279562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.279593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.279611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.284608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.284639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.284664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.289572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.289603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 
00:17:13.289621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.294583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.294613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.294631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.299581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.299612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.299631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.304583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.304615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.304633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.309553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.309584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.309602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.314515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.314546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.314563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.319531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.319562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.319581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.324456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.324488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.324506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.329340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.329378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.329396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.334218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.334257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.334276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.339711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.339742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.339760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.344196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.344227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.344245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.349001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.349031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.349049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.354002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.354053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.359153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.359184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.359203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.364709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.364742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.364760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.370133] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.370173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.370198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.376181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.376215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.376235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.382405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.382438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.382458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.387695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.387727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.387747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.393456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.393489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.393510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.399262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.399295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.399314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.405506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.405539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.405558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.410405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.410438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.410456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.416793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.416826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.416845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.422963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.423003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.423023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.429294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.429327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 00:17:13.429346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:39.002 [2024-07-16 00:17:13.434910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b8bf70) 00:33:39.002 [2024-07-16 00:17:13.434943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.002 [2024-07-16 
00:17:13.434961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:39.002
00:33:39.002 Latency(us)
00:33:39.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.003 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:39.003 nvme0n1 : 2.00 5820.75 727.59 0.00 0.00 2743.44 719.08 11505.21
00:33:39.003 ===================================================================================================================
00:33:39.003 Total : 5820.75 727.59 0.00 0.00 2743.44 719.08 11505.21
00:33:39.003 0
00:33:39.003 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:39.003 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:39.003 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:39.003 | .driver_specific
00:33:39.003 | .nvme_error
00:33:39.003 | .status_code
00:33:39.003 | .command_transient_transport_error'
00:33:39.003 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 376 > 0 ))
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1378096
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1378096 ']'
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1378096
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:39.259 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1378096
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1378096'
killing process with pid 1378096
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1378096
00:33:39.517 Received shutdown signal, test time was about 2.000000 seconds
00:33:39.517
00:33:39.517 Latency(us)
00:33:39.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.517 ===================================================================================================================
00:33:39.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1378096
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1378398
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1378398 /var/tmp/bperf.sock
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1378398 ']'
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:39.517 00:17:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:39.517 [2024-07-16 00:17:13.990262] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:33:39.517 [2024-07-16 00:17:13.990365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378398 ]
00:33:39.517 EAL: No free 2048 kB hugepages reported on node 1
00:33:39.775 [2024-07-16 00:17:14.050698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.775 [2024-07-16 00:17:14.141471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:39.775 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:39.775 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:39.775 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:39.775 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:40.033 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:40.033 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:40.033 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:40.291 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:40.291 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:40.291 00:17:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:40.550 nvme0n1
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:40.808 00:17:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:40.808 Running I/O for 2 seconds...
00:33:40.808 [2024-07-16 00:17:15.216732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190fa3a0
00:33:40.808 [2024-07-16 00:17:15.217903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:40.808 [2024-07-16 00:17:15.217947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:40.808 [2024-07-16 00:17:15.231175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190eaab8
00:33:40.808 [2024-07-16 00:17:15.232501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:40.808 [2024-07-16 00:17:15.232534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002c
p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.245634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190e8d30 00:33:40.808 [2024-07-16 00:17:15.247149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.247181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.261911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190e8d30 00:33:40.808 [2024-07-16 00:17:15.264164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.264195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.273808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190fc128 00:33:40.808 [2024-07-16 00:17:15.275302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.275332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.287400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190edd58 00:33:40.808 [2024-07-16 00:17:15.288874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.288904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.301202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190e0630 00:33:40.808 [2024-07-16 00:17:15.302680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.302710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:40.808 [2024-07-16 00:17:15.314756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190ee5c8 00:33:40.808 [2024-07-16 00:17:15.316234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:40.808 [2024-07-16 00:17:15.316264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:41.066 [2024-07-16 00:17:15.327267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190fb048 00:33:41.066 [2024-07-16 00:17:15.328723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.066 [2024-07-16 00:17:15.328753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:41.066 [2024-07-16 00:17:15.340847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190eee38 00:33:41.066 [2024-07-16 00:17:15.342297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.342327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.355310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190e84c0 00:33:41.067 [2024-07-16 00:17:15.356975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.357005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.369642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190e0ea0 00:33:41.067 [2024-07-16 00:17:15.371493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.371523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.382593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.382812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.382842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.397365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.397574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:41.067 [2024-07-16 00:17:15.397603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.412281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.412494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.412524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.426946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.427165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.427195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.441656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.441864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.441893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.456342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.456552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:17016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.456580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.471004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.471225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.471254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.485727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.485937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.485965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.500416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.500652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.515123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.515345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.515375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.529804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.530013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.530041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.544491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.544702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.544731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.559114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 [2024-07-16 00:17:15.559332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.559373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.067 [2024-07-16 00:17:15.573825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.067 
[2024-07-16 00:17:15.574034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.067 [2024-07-16 00:17:15.574062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.588519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.588727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.588756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.603228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.603441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.603469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.617934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.618150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.618185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.632638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.632848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.632877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.647303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.647516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.647544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.661998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.662223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.662252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.676716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.676923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.676953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.691393] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.324 [2024-07-16 00:17:15.691608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.324 [2024-07-16 00:17:15.691636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.324 [2024-07-16 00:17:15.706032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.706251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.325 [2024-07-16 00:17:15.720736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.720947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.720975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.325 [2024-07-16 00:17:15.735358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.735578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.735606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:33:41.325 [2024-07-16 00:17:15.750106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.750361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.325 [2024-07-16 00:17:15.764768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.764982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.765011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.325 [2024-07-16 00:17:15.779469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.779688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.779715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:41.325 [2024-07-16 00:17:15.794106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:41.325 [2024-07-16 00:17:15.794323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:41.325 [2024-07-16 00:17:15.794352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:41.325 [2024-07-16 00:17:15.808739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8
00:33:41.325 [2024-07-16 00:17:15.808946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:41.325 [2024-07-16 00:17:15.808974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[~80 further cycles of the same three messages elided, 00:17:15.823 through 00:17:16.988: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8, followed by a WRITE (sqid:1, cid 19-22, len:1) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only timestamps and LBAs differ between cycles]
00:33:42.651 [2024-07-16 00:17:17.003055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8
00:33:42.651 [2024-07-16 00:17:17.003277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:42.651 [2024-07-16 00:17:17.003305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0
dnr:0 00:33:42.651 [2024-07-16 00:17:17.017759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.017969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.018008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.032551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.032766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.032795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.047291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.047506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.047535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.062012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.062231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.062260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.076770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.076982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.077011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.091485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.091697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.091725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.106261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.106472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.106500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.120987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.121212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.121241] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.136861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.137173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.137214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.651 [2024-07-16 00:17:17.152745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.651 [2024-07-16 00:17:17.153001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.651 [2024-07-16 00:17:17.153043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.918 [2024-07-16 00:17:17.167653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.918 [2024-07-16 00:17:17.167893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.918 [2024-07-16 00:17:17.167929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:42.918 [2024-07-16 00:17:17.182515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8 00:33:42.918 [2024-07-16 00:17:17.182734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:42.918 [2024-07-16 00:17:17.182765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:42.918 [2024-07-16 00:17:17.197326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8
00:33:42.918 [2024-07-16 00:17:17.197543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:42.918 [2024-07-16 00:17:17.197574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:42.918
00:33:42.918 Latency(us)
00:33:42.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.918 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:42.918 nvme0n1 : 2.01 17350.83 67.78 0.00 0.00 7359.12 3422.44 18932.62
00:33:42.918 ===================================================================================================================
00:33:42.918 Total : 17350.83 67.78 0.00 0.00 7359.12 3422.44 18932.62
00:33:42.918 0
00:33:42.918 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:42.918 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:42.918 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:42.918 | .driver_specific
00:33:42.918 | .nvme_error
00:33:42.918 | .status_code
00:33:42.918 | .command_transient_transport_error'
00:33:42.918 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 ))
00:33:43.175 00:17:17
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1378398
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1378398 ']'
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1378398
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1378398
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1378398'
killing process with pid 1378398
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1378398
Received shutdown signal, test time was about 2.000000 seconds
00:33:43.175
00:33:43.175 Latency(us)
00:33:43.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.175 ===================================================================================================================
00:33:43.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.175 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1378398
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- host/digest.sh@56 -- # rw=randwrite
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1378692
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1378692 /var/tmp/bperf.sock
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1378692 ']'
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:43.432 00:17:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.432 [2024-07-16 00:17:17.760897] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:33:43.432 [2024-07-16 00:17:17.761004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378692 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-16 00:17:17.821704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-16 00:17:17.912712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:43.689 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:43.689 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:43.689 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:43.689 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:43.946 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:43.946 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.946 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.946 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.946 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:43.946 00:17:18
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:44.510 nvme0n1
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:44.510 00:17:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
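The trace above drives a small RPC sequence before kicking off I/O: NVMe error statistics are enabled on the bdevperf side with retries disabled, crc32c error injection is first disabled for a clean attach, the controller is attached with data digest (`--ddgst`) on, and then every 32nd crc32c operation is set to corrupt so digest checks fail during the run. A minimal sketch of that sequence follows; the RPC invocations are stubbed with `echo` here so the sketch runs without a live SPDK target (drop the stubs and use the real `rpc.py` paths and sockets, as in the trace, to drive an actual instance). The relative `scripts/rpc.py` path and the split into two RPC endpoints are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of the RPC flow in the trace above. Both variables are stubbed with
# 'echo' so this runs anywhere; in the harness, the target-side RPC uses the
# default socket and the bdevperf-side RPC uses /var/tmp/bperf.sock.
TARGET_RPC="echo scripts/rpc.py"
BPERF_RPC="echo scripts/rpc.py -s /var/tmp/bperf.sock"

# Track NVMe error completions per status code; never retry failed I/O, so
# every injected digest failure is visible in bdev_get_iostat afterwards.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure injection is off while the controller attaches cleanly.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt every 32nd crc32c operation on the target; with --ddgst these
# surface as COMMAND TRANSIENT TRANSPORT ERROR completions on the initiator.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Finally start the timed workload.
echo examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```

Note the ordering matters: if corruption were enabled before the attach, the ICReq/ICResp exchange and admin traffic could fail before any data I/O is issued.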
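The harness later reads the transient-error count back over RPC (`bdev_get_iostat` piped through a `jq` filter selecting `.command_transient_transport_error`, which yields 136 in this run). The same figure can be recovered from a captured console log by counting the completion lines. A small sketch on an abbreviated excerpt; the temp-file approach and the three sample records (shortened from the output above) are illustrative, not part of the harness:

```shell
#!/bin/sh
# Count COMMAND TRANSIENT TRANSPORT ERROR completions in a captured log.
# Each injected digest failure produces one such spdk_nvme_print_completion
# record, so the line count matches the RPC-reported statistic.
log=$(mktemp)
cat > "$log" <<'EOF'
[2024-07-16 00:17:17.003055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6f7f0) with pdu=0x2000190de8a8
[2024-07-16 00:17:17.003305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[2024-07-16 00:17:17.018008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0
EOF
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' "$log"   # prints 2 for this sample
rm -f "$log"
```

The `(00/22)` in each record is status code type 0x0 (generic), status code 0x22 (Transient Transport Error), which is why the count lands in the `.nvme_error.status_code` bucket that the `jq` filter reads.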
00:33:44.510 [2024-07-16 00:17:18.924655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.510 [2024-07-16 00:17:18.924933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.510 [2024-07-16 00:17:18.924973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.510 [2024-07-16 00:17:18.930621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.510 [2024-07-16 00:17:18.931053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.510 [2024-07-16 00:17:18.931158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.510 [2024-07-16 00:17:18.936350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.510 [2024-07-16 00:17:18.936835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.510 [2024-07-16 00:17:18.936891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.510 [2024-07-16 00:17:18.941963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.510 [2024-07-16 00:17:18.942399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.942485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.947416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.947865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.947949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.952902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.953290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.953326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.958392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.958778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.958852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.964240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.964681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.964716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.969961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.970454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.970524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.975085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.975541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.975587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.980291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.980657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.980733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.985433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.985998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:44.511 [2024-07-16 00:17:18.986031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.990625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.991271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.991321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:18.995914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:18.996103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:18.996194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:19.001720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:19.002068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:19.002159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:19.006941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:19.007351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:19.007451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:19.012157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:19.012599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:19.012672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:19.017132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:19.017575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:19.017632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.511 [2024-07-16 00:17:19.022449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.511 [2024-07-16 00:17:19.022817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.511 [2024-07-16 00:17:19.022851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.770 [2024-07-16 00:17:19.029067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.770 [2024-07-16 00:17:19.029526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.770 [2024-07-16 00:17:19.029608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.770 [2024-07-16 00:17:19.034025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.770 [2024-07-16 00:17:19.034520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.770 [2024-07-16 00:17:19.034630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.770 [2024-07-16 00:17:19.038966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.770 [2024-07-16 00:17:19.039432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.770 [2024-07-16 00:17:19.039485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.770 [2024-07-16 00:17:19.043973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:44.770 [2024-07-16 00:17:19.044559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.770 [2024-07-16 00:17:19.044645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.770 [2024-07-16 00:17:19.048917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:44.770 [2024-07-16 00:17:19.049388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.049449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.054058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.054458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.054502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.058887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.059330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.059462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.063830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.064296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.064374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.068827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.069254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.069305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.073842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.074248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.074308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.078763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.079244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.079279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.083847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.084232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.084307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.088873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.089512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.089583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.093843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.094345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.094421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.098770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.099271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.099345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.103666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.104161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.104265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.108714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.109238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.109325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.113629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.113987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.114070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.770 [2024-07-16 00:17:19.118549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.770 [2024-07-16 00:17:19.119078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.770 [2024-07-16 00:17:19.119112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.123753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.124370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.124404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.129020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.129391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.129495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.134229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.134556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.134593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.139329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.139581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.139655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.144638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.145070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.145103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.149632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.149981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.150015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.154777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.155111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.155242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.159881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.160102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.160247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.164898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.165101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.165215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.170020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.170456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.170537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.175238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.175795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.175830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.180536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.181047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.181080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.185640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.186071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.186110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.190856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.191433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.191536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.195964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.196397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.196443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.201153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.201469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.201503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.206396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.206796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.211809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.212096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.212176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.217620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.217907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.217941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.224108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.224288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.224371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.229728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.230199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.230244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.234838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.235186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.235267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.240000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.240507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.240541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.245070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.245493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.245564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.250391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.250714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.250800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.255523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.255989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.260773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.261104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.261186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.266115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.266567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.266659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:44.771 [2024-07-16 00:17:19.271437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.771 [2024-07-16 00:17:19.271792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.771 [2024-07-16 00:17:19.271882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.772 [2024-07-16 00:17:19.276796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.772 [2024-07-16 00:17:19.277084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.772 [2024-07-16 00:17:19.277117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.772 [2024-07-16 00:17:19.282218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:44.772 [2024-07-16 00:17:19.282526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.772 [2024-07-16 00:17:19.282570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.029 [2024-07-16 00:17:19.287372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.029 [2024-07-16 00:17:19.287767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.287811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.292851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.293224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.293264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.298000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.298302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.298356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.303226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.303507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.303551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.308515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.308721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.308765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.313544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.314185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.314219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.318419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.318899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.318946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.323522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.323994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.324093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.328499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.328902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.329024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.333523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.334055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.334106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.338691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.339188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.339261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.343856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.344465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.344554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.348835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.349378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.349412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.353835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.354491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.354624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.359155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.359722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.359841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.364161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.364405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.364474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.369569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.369706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.369740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.375373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.375684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.375717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.380641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.380910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.030 [2024-07-16 00:17:19.381035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.030 [2024-07-16 00:17:19.385637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.030 [2024-07-16 00:17:19.386097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.386132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.390927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.391332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.396148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.396569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.396629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.401294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.401500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.401533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.406551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.406889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.406924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.411699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.412118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.416935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.417418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.417453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.422156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.422429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.422524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.427264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.427691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.427738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.432682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.432874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.438258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.438635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.438669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.443340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.443664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.443697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.448507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:45.031 [2024-07-16 00:17:19.449005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.031 [2024-07-16 00:17:19.449039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:45.031 [2024-07-16 00:17:19.453942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.454372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.454422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.458944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.459263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.459315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.464146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.464406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.464493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.469332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.469557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.469606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.474450] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.474917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.474950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.479691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.480186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.480219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.031 [2024-07-16 00:17:19.484749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.031 [2024-07-16 00:17:19.485122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.031 [2024-07-16 00:17:19.485172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.489884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.490415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.490449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:33:45.032 [2024-07-16 00:17:19.495021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.495487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.495544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.500285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.500551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.500696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.505468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.505646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.505689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.510692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.510944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.511025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.515895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.516270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.516304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.521166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.521633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.521688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.526282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.526588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.526740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.531554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.532038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.532145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.536781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.537034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.537127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.032 [2024-07-16 00:17:19.542037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.032 [2024-07-16 00:17:19.542426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.032 [2024-07-16 00:17:19.542482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.547205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.547482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.547518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.552498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.552710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:45.292 [2024-07-16 00:17:19.552787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.557659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.557829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.562907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.563292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.563450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.568133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.568585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.568669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.573415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.573591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.573640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.578580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.578912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.578993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.583790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.583963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.584003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.589067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.589276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.589404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.594170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.594629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.594661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.599367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.599703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.599736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.604588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.604732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.604771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.610170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.610261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.610333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.615742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:45.292 [2024-07-16 00:17:19.616306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.616378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.620730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.621173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.621207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.625891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.626358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.626439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.630889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.631331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.631407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.635876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.636386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.636449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.640909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.641498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.641601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.645929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.646506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.646585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.650978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.651429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.651508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.655898] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.656267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.292 [2024-07-16 00:17:19.656316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.292 [2024-07-16 00:17:19.660867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.292 [2024-07-16 00:17:19.661420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.661468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.665797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.666215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.666307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.670749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.671190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.671233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:45.293 [2024-07-16 00:17:19.675821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.676333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.676440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.680890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.681419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.681466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.685875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.686297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.686414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.691016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.691411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.691445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.696032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.696609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.696656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.700912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.701432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.701530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.705955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.706399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.706482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.710911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.711320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.711400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.715957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.716552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.716642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.721054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.721641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.721729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.726151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.726603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.726649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.731161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.731641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:45.293 [2024-07-16 00:17:19.731678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.736254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.736788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.736823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.741157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.741526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.741567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.746207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.746614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.751155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.751794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.751855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.756271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.756773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.756872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.761271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.761710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.761771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.766281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.766890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.766939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.771366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.771749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.771830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.776466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.776910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.777014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.781395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.781868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.781928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.786492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.786897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.786938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.791445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:45.293 [2024-07-16 00:17:19.791918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.293 [2024-07-16 00:17:19.792043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.293 [2024-07-16 00:17:19.796439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.293 [2024-07-16 00:17:19.796928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.294 [2024-07-16 00:17:19.797040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.294 [2024-07-16 00:17:19.801546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.294 [2024-07-16 00:17:19.801995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.294 [2024-07-16 00:17:19.802082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.806492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.807025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.807123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.811577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.812115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.812201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.816542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.817077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.821548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.822019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.822072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.826428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.826832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.826905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.831521] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.832053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.832199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.836512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.837079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.837172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.841457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.842014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.842150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.846482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.846971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.847070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:33:45.553 [2024-07-16 00:17:19.851481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.851835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.851926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.856495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.856816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.861478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.861922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.861969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.866371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.866932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.867014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.871339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.871786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.871869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.876361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.876865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.876904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.881403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.882009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.882079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.886339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.886745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.886824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.891315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.891601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.891675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.896153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.896599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.896633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.901226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.901641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.901679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.906106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.906643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:45.553 [2024-07-16 00:17:19.906741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.911184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.911687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.911772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.916178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.916653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.916784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.921376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.921903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.926518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.927120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.927218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.931522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.932105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.932164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.936589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.937103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.937210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.941583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.553 [2024-07-16 00:17:19.942179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.553 [2024-07-16 00:17:19.942221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.553 [2024-07-16 00:17:19.946712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.947156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.947208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.951736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.952135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.952242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.956673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.957307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.957345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.961696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.962097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.962147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.966712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:45.554 [2024-07-16 00:17:19.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.967412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.971710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.972324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.972410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.976834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.977432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.977492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.981792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.982327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.982374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.986892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.987420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.987541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.991912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.992384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.992419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:19.997039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:19.997507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:19.997598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.002199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.002631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.002723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.007351] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.007844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.007895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.012344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.012818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.012870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.017213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.017769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.017825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.022283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.022845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.023026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:45.554 [2024-07-16 00:17:20.027468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.027987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.028039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.032544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.033106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.033225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.037511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.037941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.037993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.042755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.043192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.043254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.049129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.049570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.049681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.054107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.054645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.059151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.059678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.059729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.554 [2024-07-16 00:17:20.064239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.554 [2024-07-16 00:17:20.064849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.554 [2024-07-16 00:17:20.064969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.069321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.069958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.070006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.074232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.074908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.074942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.079194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.079696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.079864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.084204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.084789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:45.814 [2024-07-16 00:17:20.084866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.089183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.089771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.089806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.094119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.094707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.099227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.099753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.099788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.104240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.104857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.104935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.109439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.110034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.110068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.114338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.114895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.114929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.119419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.119948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.119982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.124539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.124978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.125077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.129500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.129922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.129969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.134501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.135115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.135240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.139634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.140064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.140098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.144458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:45.814 [2024-07-16 00:17:20.144992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.145123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.149579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.150383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.150417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.154867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.155414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.155448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.160106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.160332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.160387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.814 [2024-07-16 00:17:20.165189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.814 [2024-07-16 00:17:20.165590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.814 [2024-07-16 00:17:20.165630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.170255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.170690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.170781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.175301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.175784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.175840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.180435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.181049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.181123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.185481] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.185920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.185966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.190533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.190921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.190994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.195369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.195806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.195843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.200353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.200834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.200920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:33:45.815 [2024-07-16 00:17:20.205352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.205909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.205966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.210368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.210894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.215295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.215594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.215691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.220229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.220732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.220878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.225237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.225655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.225729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.230243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.230639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.230750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.235350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.235682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.235783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.240318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.240768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.240819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.245418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.245824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.245965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.250632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.250774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.250816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.255838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.256127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.256168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.261398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.261675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:45.815 [2024-07-16 00:17:20.261709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.266683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.266922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.272026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.272347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.272381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.277348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.277609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.282791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.283073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.283204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.288002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.288274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.288308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.293108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.293318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.293498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.298228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.298479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.298624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.303423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.303948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.303983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.308596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.309071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.309116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:45.815 [2024-07-16 00:17:20.314082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.815 [2024-07-16 00:17:20.314389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.815 [2024-07-16 00:17:20.314423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:45.816 [2024-07-16 00:17:20.319174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.816 [2024-07-16 00:17:20.319604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.816 [2024-07-16 00:17:20.319645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.816 [2024-07-16 00:17:20.324332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:45.816 [2024-07-16 
00:17:20.324741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.816 [2024-07-16 00:17:20.324776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.074 [2024-07-16 00:17:20.329463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.074 [2024-07-16 00:17:20.329818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.074 [2024-07-16 00:17:20.329918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.074 [2024-07-16 00:17:20.334633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.074 [2024-07-16 00:17:20.334985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.074 [2024-07-16 00:17:20.335021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.074 [2024-07-16 00:17:20.339744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.074 [2024-07-16 00:17:20.340072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.074 [2024-07-16 00:17:20.340125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.074 [2024-07-16 00:17:20.344876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.345364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.345400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.350170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.350510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.350549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.355306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.355717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.355750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.360496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.360728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.360837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.365828] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.366073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.366191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.370911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.371075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.371131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.375845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.376255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.376287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.075 [2024-07-16 00:17:20.381041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.075 [2024-07-16 00:17:20.381502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.075 [2024-07-16 00:17:20.381539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:33:46.075 [2024-07-16 00:17:20.385965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:46.075 [2024-07-16 00:17:20.386201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.075 [2024-07-16 00:17:20.386239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further identical data-digest-error / WRITE / TRANSIENT TRANSPORT ERROR triplets elided; only the timestamp, lba, cid, and sqhd fields vary ...]
00:33:46.335 [2024-07-16 00:17:20.818394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90
00:33:46.335 [2024-07-16 00:17:20.818812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:46.335 [2024-07-16 00:17:20.818876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.335 [2024-07-16 00:17:20.823575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.336 [2024-07-16 00:17:20.823920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.336 [2024-07-16 00:17:20.824028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.336 [2024-07-16 00:17:20.828681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.336 [2024-07-16 00:17:20.828950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.336 [2024-07-16 00:17:20.829157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.336 [2024-07-16 00:17:20.834006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.336 [2024-07-16 00:17:20.834151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.336 [2024-07-16 00:17:20.834192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.336 [2024-07-16 00:17:20.839298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.336 [2024-07-16 00:17:20.839717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.336 [2024-07-16 00:17:20.839751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.336 [2024-07-16 00:17:20.844239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.336 [2024-07-16 00:17:20.844518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.336 [2024-07-16 00:17:20.844623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.593 [2024-07-16 00:17:20.849363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.593 [2024-07-16 00:17:20.849533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.593 [2024-07-16 00:17:20.849633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.593 [2024-07-16 00:17:20.854715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.593 [2024-07-16 00:17:20.854930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.593 [2024-07-16 00:17:20.854965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.593 [2024-07-16 00:17:20.859809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.593 [2024-07-16 00:17:20.860132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.593 [2024-07-16 00:17:20.860221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.593 [2024-07-16 00:17:20.864937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.593 [2024-07-16 00:17:20.865256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.865304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.870033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.870389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.870435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.875351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.875687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.875722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.880736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 
00:33:46.594 [2024-07-16 00:17:20.880906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.880948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.885799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.885963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.886019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.890951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.891399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.891434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.896078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.896509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.896543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.901345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.901569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.901702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.906569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.906799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.906840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.911876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.912040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.912128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.594 [2024-07-16 00:17:20.916933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f6fb30) with pdu=0x2000190fef90 00:33:46.594 [2024-07-16 00:17:20.917364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.594 [2024-07-16 00:17:20.917419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:46.594 00:33:46.594 Latency(us) 00:33:46.594 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.594 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:46.594 nvme0n1 : 2.00 5973.82 746.73 0.00 0.00 2664.92 1978.22 8398.32 00:33:46.594 =================================================================================================================== 00:33:46.594 Total : 5973.82 746.73 0.00 0.00 2664.92 1978.22 8398.32 00:33:46.594 0 00:33:46.594 00:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:46.594 00:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:46.594 00:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:46.594 | .driver_specific 00:33:46.594 | .nvme_error 00:33:46.594 | .status_code 00:33:46.594 | .command_transient_transport_error' 00:33:46.594 00:17:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1378692 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1378692 ']' 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1378692 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1378692 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1378692' 00:33:46.852 killing process with pid 1378692 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1378692 00:33:46.852 Received shutdown signal, test time was about 2.000000 seconds 00:33:46.852 00:33:46.852 Latency(us) 00:33:46.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.852 =================================================================================================================== 00:33:46.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.852 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1378692 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1377691 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1377691 ']' 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1377691 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1377691 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1377691' 00:33:47.111 killing process with pid 1377691 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1377691 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1377691 00:33:47.111 00:33:47.111 real 0m15.253s 00:33:47.111 user 0m30.101s 00:33:47.111 sys 0m4.221s 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:47.111 ************************************ 00:33:47.111 END TEST nvmf_digest_error 00:33:47.111 ************************************ 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:47.111 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:47.111 rmmod nvme_tcp 00:33:47.370 rmmod nvme_fabrics 00:33:47.370 rmmod nvme_keyring 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1377691 ']' 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # 
killprocess 1377691 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1377691 ']' 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1377691 00:33:47.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1377691) - No such process 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1377691 is not found' 00:33:47.370 Process with pid 1377691 is not found 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:47.370 00:17:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.272 00:17:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:49.272 00:33:49.272 real 0m34.751s 00:33:49.272 user 1m1.994s 00:33:49.272 sys 0m9.845s 00:33:49.272 00:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:49.272 00:17:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:49.272 ************************************ 00:33:49.272 END TEST nvmf_digest 00:33:49.272 ************************************ 00:33:49.272 00:17:23 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:49.272 00:17:23 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:49.272 00:17:23 nvmf_tcp 
-- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:49.272 00:17:23 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:49.272 00:17:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:49.272 00:17:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:49.272 00:17:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.272 ************************************ 00:33:49.272 START TEST nvmf_bdevperf 00:33:49.272 ************************************ 00:33:49.272 00:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:49.530 * Looking for test storage... 00:33:49.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.530 00:17:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:33:49.531 00:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:50.910 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:50.910 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.910 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:50.911 Found net devices under 0000:08:00.0: cvl_0_0 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:50.911 Found net devices under 0000:08:00.1: cvl_0_1 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:50.911 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:51.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:51.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:33:51.170 00:33:51.170 --- 10.0.0.2 ping statistics --- 00:33:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.170 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:51.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:33:51.170 00:33:51.170 --- 10.0.0.1 ping statistics --- 00:33:51.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.170 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1380479 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1380479 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1380479 ']' 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:51.170 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.170 [2024-07-16 00:17:25.543501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:51.170 [2024-07-16 00:17:25.543590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:51.170 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.170 [2024-07-16 00:17:25.609617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:51.429 [2024-07-16 00:17:25.700567] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:51.429 [2024-07-16 00:17:25.700623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:51.429 [2024-07-16 00:17:25.700640] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:51.429 [2024-07-16 00:17:25.700653] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:51.429 [2024-07-16 00:17:25.700665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:51.429 [2024-07-16 00:17:25.700748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:51.429 [2024-07-16 00:17:25.701073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:51.429 [2024-07-16 00:17:25.701077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 [2024-07-16 00:17:25.820224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 Malloc0 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:51.429 [2024-07-16 00:17:25.874933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:51.429 { 00:33:51.429 "params": { 00:33:51.429 "name": "Nvme$subsystem", 00:33:51.429 "trtype": "$TEST_TRANSPORT", 00:33:51.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:51.429 "adrfam": "ipv4", 00:33:51.429 "trsvcid": "$NVMF_PORT", 00:33:51.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:51.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:51.429 "hdgst": ${hdgst:-false}, 00:33:51.429 "ddgst": ${ddgst:-false} 00:33:51.429 }, 00:33:51.429 "method": "bdev_nvme_attach_controller" 00:33:51.429 } 00:33:51.429 EOF 00:33:51.429 )") 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:51.429 00:17:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:51.429 "params": { 00:33:51.429 "name": "Nvme1", 00:33:51.429 "trtype": "tcp", 00:33:51.429 "traddr": "10.0.0.2", 00:33:51.429 "adrfam": "ipv4", 00:33:51.429 "trsvcid": "4420", 00:33:51.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:51.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:51.429 "hdgst": false, 00:33:51.429 "ddgst": false 00:33:51.429 }, 00:33:51.429 "method": "bdev_nvme_attach_controller" 00:33:51.429 }' 00:33:51.429 [2024-07-16 00:17:25.924057] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:51.429 [2024-07-16 00:17:25.924167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380582 ] 00:33:51.687 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.687 [2024-07-16 00:17:25.983985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:51.687 [2024-07-16 00:17:26.071296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.945 Running I/O for 1 seconds... 
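The `gen_nvmf_target_json` expansion traced above can be reproduced in isolation. This is a simplified, standalone sketch of the same pattern (a heredoc template filled from environment variables, emitted as the bdevperf `--json` config); the fallback values here are assumptions copied from this log, and the real helper in nvmf/common.sh additionally collects one entry per subsystem into a `config` array and pipes the result through `jq .`:

```shell
#!/usr/bin/env sh
# Simplified sketch of the gen_nvmf_target_json pattern: fill a heredoc
# template from environment variables, falling back to the values seen
# in this log (tcp / 10.0.0.2 / 4420). These defaults are illustrative
# assumptions, not fixed constants of the harness.
gen_nvmf_target_json() {
    subsystem=${1:-1}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 1
```

Because the config is generated on the fly, the harness can hand it to bdevperf over a process-substitution file descriptor (the `--json /dev/fd/62` seen above) without ever writing a config file to disk.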
00:33:52.880
00:33:52.880 Latency(us)
00:33:52.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:52.880 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:52.880 Verification LBA range: start 0x0 length 0x4000
00:33:52.880 Nvme1n1 : 1.01 7826.33 30.57 0.00 0.00 16260.47 2645.71 17379.18
00:33:52.880 ===================================================================================================================
00:33:52.880 Total : 7826.33 30.57 0.00 0.00 16260.47 2645.71 17379.18
00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1380684 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:53.138 { 00:33:53.138 "params": { 00:33:53.138 "name": "Nvme$subsystem", 00:33:53.138 "trtype": "$TEST_TRANSPORT", 00:33:53.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.138 "adrfam": "ipv4", 00:33:53.138 "trsvcid": "$NVMF_PORT", 00:33:53.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.138 "hdgst": ${hdgst:-false}, 00:33:53.138 "ddgst": ${ddgst:-false} 00:33:53.138 }, 00:33:53.138 "method": "bdev_nvme_attach_controller" 00:33:53.138 } 00:33:53.138 EOF 00:33:53.138 )") 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@554 -- # cat 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:53.138 00:17:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:53.138 "params": { 00:33:53.138 "name": "Nvme1", 00:33:53.138 "trtype": "tcp", 00:33:53.138 "traddr": "10.0.0.2", 00:33:53.138 "adrfam": "ipv4", 00:33:53.138 "trsvcid": "4420", 00:33:53.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.138 "hdgst": false, 00:33:53.138 "ddgst": false 00:33:53.138 }, 00:33:53.138 "method": "bdev_nvme_attach_controller" 00:33:53.138 }' 00:33:53.138 [2024-07-16 00:17:27.460705] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:53.138 [2024-07-16 00:17:27.460801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380684 ] 00:33:53.138 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.138 [2024-07-16 00:17:27.521801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.138 [2024-07-16 00:17:27.610455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.703 Running I/O for 15 seconds... 
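The 15-second run started here is the failover half of the test: bdevperf was launched with `-f` (keep running after a controller failure), and while it is in flight the target process is killed so that every outstanding command completes as ABORTED - SQ DELETION. A dry-run sketch of that orchestration follows; the binary path is a placeholder, the PID is the one from this log, and `RUN=echo` (the default here) prints each step instead of executing it, since the real sequence needs an SPDK build and root:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the bdevperf.sh failover sequence: start a long
# verify workload with -f, let it reach steady state, then SIGKILL the
# nvmf target out from under it. With RUN=echo every step is printed
# rather than executed, so no SPDK install or root access is needed.
BDEVPERF=${BDEVPERF:-./build/examples/bdevperf}  # placeholder path
nvmfpid=${nvmfpid:-1380479}                      # target PID from this log

run_failover() {
    RUN=${RUN:-echo}
    $RUN "$BDEVPERF" --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    $RUN sleep 3              # give I/O time to reach steady state
    $RUN kill -9 "$nvmfpid"   # drop the target mid-run
    wait "$bdevperfpid"       # with -f, bdevperf must outlive the target
}

run_failover
```

The `-f` flag is what distinguishes this run from the earlier 1-second baseline: without it, losing the only controller would end the benchmark instead of exercising the abort-and-reconnect path.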
00:33:56.233 00:17:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1380479 00:33:56.233 00:17:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:33:56.233 [2024-07-16 00:17:30.430250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:56.233 [2024-07-16 00:17:30.430306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:56.233 [... identical nvme_io_qpair_print_command / ABORTED - SQ DELETION completion pairs repeated for the remaining in-flight WRITE commands (qid:1, len:8, lba:20712 through lba:21056) elided ...]
00:33:56.234 [2024-07-16
00:17:30.431829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.431844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.431861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.431876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.431893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.431908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.431926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.431941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.431958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.431973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.431990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432005] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.234 [2024-07-16 00:17:30.432252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.234 [2024-07-16 00:17:30.432415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.234 [2024-07-16 00:17:30.432432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 
[2024-07-16 00:17:30.432776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432963] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.432979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.432996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.235 [2024-07-16 00:17:30.433529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.235 [2024-07-16 00:17:30.433547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 
00:17:30.433754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433936] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.433969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.433986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.434001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.434034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.434066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.236 [2024-07-16 00:17:30.434099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.236 [2024-07-16 00:17:30.434577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1536aa0 is same with the state(5) to be set 00:33:56.236 [2024-07-16 00:17:30.434611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.236 [2024-07-16 00:17:30.434623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.236 [2024-07-16 00:17:30.434636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:8 PRP1 0x0 PRP2 0x0 00:33:56.236 [2024-07-16 00:17:30.434651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.236 [2024-07-16 00:17:30.434706] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1536aa0 was disconnected and freed. reset controller. 
00:33:56.236 [2024-07-16 00:17:30.438950] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.236 [2024-07-16 00:17:30.439020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.439811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.439843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.439861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.440126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.440405] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.440428] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.440445] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.444477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.453826] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.454369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.454422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.454442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.454713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.454982] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.455004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.455019] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.459093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.468213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.468694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.468754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.468773] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.469044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.469324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.469348] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.469363] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.473431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.482769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.483262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.483308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.483328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.483599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.483868] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.483890] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.483905] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.487986] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.497142] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.497665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.497706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.497726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.497997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.498278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.498301] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.498316] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.502381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.511738] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.512283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.512325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.512344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.512615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.512884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.512906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.512922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.517008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.526158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.526681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.526722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.526742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.527013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.527305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.527328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.527344] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.531415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.540520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.541003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.541053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.541071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.541347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.541615] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.541637] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.237 [2024-07-16 00:17:30.541652] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.237 [2024-07-16 00:17:30.545749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.237 [2024-07-16 00:17:30.554932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.237 [2024-07-16 00:17:30.555389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.237 [2024-07-16 00:17:30.555437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.237 [2024-07-16 00:17:30.555455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.237 [2024-07-16 00:17:30.555719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.237 [2024-07-16 00:17:30.555986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.237 [2024-07-16 00:17:30.556008] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.556023] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.560126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.569502] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.569969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.569998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.570015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.570290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.570557] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.570579] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.570594] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.574679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.584065] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.584508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.584579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.584599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.584870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.585152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.585174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.585190] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.589302] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.598486] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.599011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.599053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.599073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.599356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.599626] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.599648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.599663] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.603726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.612894] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.613371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.613423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.613441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.613706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.613973] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.613995] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.614010] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.618091] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.627267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.627805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.627846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.627873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.628157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.628426] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.628448] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.238 [2024-07-16 00:17:30.628465] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.238 [2024-07-16 00:17:30.632528] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.238 [2024-07-16 00:17:30.641676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.238 [2024-07-16 00:17:30.642187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.238 [2024-07-16 00:17:30.642229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.238 [2024-07-16 00:17:30.642249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.238 [2024-07-16 00:17:30.642520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.238 [2024-07-16 00:17:30.642789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.238 [2024-07-16 00:17:30.642811] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.642826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.646933] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.656088] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.656670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.656711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.656731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.657002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.657285] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.657307] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.657323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.661412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.670512] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.671055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.671097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.671117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.671398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.671667] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.671695] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.671711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.675817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.684969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.685446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.685496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.685514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.685778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.686045] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.686067] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.686083] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.690145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.699455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.699930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.699959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.699977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.700250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.700517] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.700539] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.700554] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.704600] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.713907] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.714308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.714337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.714355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.714619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.714886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.714907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.714923] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.718981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.728491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.728965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.728995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.729012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.729287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.729555] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.729577] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.729592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.239 [2024-07-16 00:17:30.733651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.239 [2024-07-16 00:17:30.743011] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.239 [2024-07-16 00:17:30.743499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.239 [2024-07-16 00:17:30.743529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.239 [2024-07-16 00:17:30.743546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.239 [2024-07-16 00:17:30.743810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.239 [2024-07-16 00:17:30.744076] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.239 [2024-07-16 00:17:30.744098] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.239 [2024-07-16 00:17:30.744114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.499 [2024-07-16 00:17:30.748177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.499 [2024-07-16 00:17:30.757525] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.499 [2024-07-16 00:17:30.757972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.499 [2024-07-16 00:17:30.758012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.499 [2024-07-16 00:17:30.758032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.499 [2024-07-16 00:17:30.758315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.499 [2024-07-16 00:17:30.758584] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.499 [2024-07-16 00:17:30.758606] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.499 [2024-07-16 00:17:30.758621] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.499 [2024-07-16 00:17:30.762686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.499 [2024-07-16 00:17:30.772102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.499 [2024-07-16 00:17:30.772655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.499 [2024-07-16 00:17:30.772696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.499 [2024-07-16 00:17:30.772716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.499 [2024-07-16 00:17:30.772994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.499 [2024-07-16 00:17:30.773276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.499 [2024-07-16 00:17:30.773299] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.499 [2024-07-16 00:17:30.773314] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.499 [2024-07-16 00:17:30.777400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.499 [2024-07-16 00:17:30.786547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.499 [2024-07-16 00:17:30.787045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.499 [2024-07-16 00:17:30.787086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.499 [2024-07-16 00:17:30.787105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.499 [2024-07-16 00:17:30.787387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.499 [2024-07-16 00:17:30.787657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.499 [2024-07-16 00:17:30.787679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.499 [2024-07-16 00:17:30.787695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.499 [2024-07-16 00:17:30.791750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.499 [2024-07-16 00:17:30.801135] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.499 [2024-07-16 00:17:30.801613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.499 [2024-07-16 00:17:30.801654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.499 [2024-07-16 00:17:30.801674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.801945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.802226] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.802250] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.802265] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.806326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.815747] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.816262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.816304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.816323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.816594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.816862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.816884] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.816906] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.820977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.830330] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.830826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.830878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.830895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.831170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.831439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.831460] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.831475] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.835558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.844847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.845374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.845416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.845435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.845707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.845976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.845998] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.846013] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.850083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.859449] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.859897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.859928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.859945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.860223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.860491] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.860513] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.860528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.864602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.873970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.874459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.874489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.874507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.874771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.875038] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.875060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.875081] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.879172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.888516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.888976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.889057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.889075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.889349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.889616] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.889638] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.889653] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.893721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.902903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.903402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.903448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.903465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.903730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.903996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.904018] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.904033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.908130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.500 [2024-07-16 00:17:30.917297] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.500 [2024-07-16 00:17:30.917752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.500 [2024-07-16 00:17:30.917839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.500 [2024-07-16 00:17:30.917857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.500 [2024-07-16 00:17:30.918128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.500 [2024-07-16 00:17:30.918406] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.500 [2024-07-16 00:17:30.918427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.500 [2024-07-16 00:17:30.918443] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.500 [2024-07-16 00:17:30.922518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:30.931682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:30.932210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:30.932252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:30.932271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:30.932542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:30.932811] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:30.932833] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:30.932848] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:30.936940] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:30.946269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:30.946743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:30.946774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:30.946791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:30.947055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:30.947334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:30.947357] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:30.947372] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:30.951431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:30.960810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:30.961316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:30.961358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:30.961377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:30.961648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:30.961916] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:30.961938] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:30.961967] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:30.966057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:30.975239] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:30.975659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:30.975690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:30.975709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:30.975976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:30.976255] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:30.976279] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:30.976295] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:30.980337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:30.989643] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:30.990192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:30.990234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:30.990254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:30.990524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:30.990793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:30.990816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:30.990832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:30.994880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.501 [2024-07-16 00:17:31.004188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.501 [2024-07-16 00:17:31.004632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.501 [2024-07-16 00:17:31.004673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.501 [2024-07-16 00:17:31.004693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.501 [2024-07-16 00:17:31.004964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.501 [2024-07-16 00:17:31.005244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.501 [2024-07-16 00:17:31.005269] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.501 [2024-07-16 00:17:31.005285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.501 [2024-07-16 00:17:31.009333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.761 [2024-07-16 00:17:31.018658] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.761 [2024-07-16 00:17:31.019248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.761 [2024-07-16 00:17:31.019295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.761 [2024-07-16 00:17:31.019316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.761 [2024-07-16 00:17:31.019588] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.761 [2024-07-16 00:17:31.019856] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.761 [2024-07-16 00:17:31.019878] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.761 [2024-07-16 00:17:31.019894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.761 [2024-07-16 00:17:31.023939] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.761 [2024-07-16 00:17:31.033001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.761 [2024-07-16 00:17:31.033543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.761 [2024-07-16 00:17:31.033615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.761 [2024-07-16 00:17:31.033635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.761 [2024-07-16 00:17:31.033906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.761 [2024-07-16 00:17:31.034186] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.761 [2024-07-16 00:17:31.034209] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.761 [2024-07-16 00:17:31.034225] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.761 [2024-07-16 00:17:31.038267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.761 [2024-07-16 00:17:31.047560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.761 [2024-07-16 00:17:31.048040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.761 [2024-07-16 00:17:31.048072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.761 [2024-07-16 00:17:31.048090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.761 [2024-07-16 00:17:31.048372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.761 [2024-07-16 00:17:31.048639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.761 [2024-07-16 00:17:31.048662] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.761 [2024-07-16 00:17:31.048677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.761 [2024-07-16 00:17:31.052719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.761 [2024-07-16 00:17:31.062029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.761 [2024-07-16 00:17:31.062536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.761 [2024-07-16 00:17:31.062577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.761 [2024-07-16 00:17:31.062597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.761 [2024-07-16 00:17:31.062868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.761 [2024-07-16 00:17:31.063158] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.761 [2024-07-16 00:17:31.063190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.761 [2024-07-16 00:17:31.063206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.761 [2024-07-16 00:17:31.067268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.761 [2024-07-16 00:17:31.076588] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.761 [2024-07-16 00:17:31.077071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.761 [2024-07-16 00:17:31.077102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.761 [2024-07-16 00:17:31.077120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.761 [2024-07-16 00:17:31.077394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.761 [2024-07-16 00:17:31.077667] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.762 [2024-07-16 00:17:31.077689] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.762 [2024-07-16 00:17:31.077705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.762 [2024-07-16 00:17:31.081759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.762 [2024-07-16 00:17:31.091082] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.762 [2024-07-16 00:17:31.091533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.762 [2024-07-16 00:17:31.091563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.762 [2024-07-16 00:17:31.091581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.762 [2024-07-16 00:17:31.091846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.762 [2024-07-16 00:17:31.092113] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.762 [2024-07-16 00:17:31.092135] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.762 [2024-07-16 00:17:31.092161] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.762 [2024-07-16 00:17:31.096219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.762 [2024-07-16 00:17:31.105633] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.762 [2024-07-16 00:17:31.106073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.762 [2024-07-16 00:17:31.106103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.762 [2024-07-16 00:17:31.106120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.762 [2024-07-16 00:17:31.106395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.762 [2024-07-16 00:17:31.106662] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.762 [2024-07-16 00:17:31.106684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.762 [2024-07-16 00:17:31.106699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.762 [2024-07-16 00:17:31.110796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.762 [2024-07-16 00:17:31.120259] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.762 [2024-07-16 00:17:31.120743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.762 [2024-07-16 00:17:31.120773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.762 [2024-07-16 00:17:31.120790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.762 [2024-07-16 00:17:31.121053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.762 [2024-07-16 00:17:31.121332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.762 [2024-07-16 00:17:31.121355] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.762 [2024-07-16 00:17:31.121370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.762 [2024-07-16 00:17:31.125458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.762 [2024-07-16 00:17:31.134668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:56.762 [2024-07-16 00:17:31.135161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:56.762 [2024-07-16 00:17:31.135190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:56.762 [2024-07-16 00:17:31.135208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:56.762 [2024-07-16 00:17:31.135472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:56.762 [2024-07-16 00:17:31.135738] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:56.762 [2024-07-16 00:17:31.135760] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:56.762 [2024-07-16 00:17:31.135776] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:56.762 [2024-07-16 00:17:31.139838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:56.762 [2024-07-16 00:17:31.149215] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.149706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.149747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.149767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.150038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.150319] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.150342] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.150358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.154438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.163606] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.164102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.164151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.164180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.164458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.164727] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.164749] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.164764] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.168844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.178165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.178638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.178683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.178701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.178965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.179247] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.179270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.179285] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.183330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.192636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.192999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.193028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.193046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.193323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.193591] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.193613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.193628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.197676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.207011] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.207490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.207572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.207590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.207854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.208121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.208159] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.208175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.212230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.221580] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.222057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.222097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.222117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.222396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.222668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.762 [2024-07-16 00:17:31.222690] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.762 [2024-07-16 00:17:31.222705] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.762 [2024-07-16 00:17:31.226774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.762 [2024-07-16 00:17:31.236126] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.762 [2024-07-16 00:17:31.236650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.762 [2024-07-16 00:17:31.236692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.762 [2024-07-16 00:17:31.236711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.762 [2024-07-16 00:17:31.236982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.762 [2024-07-16 00:17:31.237263] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.763 [2024-07-16 00:17:31.237287] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.763 [2024-07-16 00:17:31.237302] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.763 [2024-07-16 00:17:31.241357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.763 [2024-07-16 00:17:31.250711] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.763 [2024-07-16 00:17:31.251202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.763 [2024-07-16 00:17:31.251263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.763 [2024-07-16 00:17:31.251282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.763 [2024-07-16 00:17:31.251553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.763 [2024-07-16 00:17:31.251822] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.763 [2024-07-16 00:17:31.251844] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.763 [2024-07-16 00:17:31.251860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.763 [2024-07-16 00:17:31.255921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:56.763 [2024-07-16 00:17:31.265267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:56.763 [2024-07-16 00:17:31.265747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:56.763 [2024-07-16 00:17:31.265789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:56.763 [2024-07-16 00:17:31.265809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:56.763 [2024-07-16 00:17:31.266080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:56.763 [2024-07-16 00:17:31.266360] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:56.763 [2024-07-16 00:17:31.266383] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:56.763 [2024-07-16 00:17:31.266399] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:56.763 [2024-07-16 00:17:31.270455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.022 [2024-07-16 00:17:31.279751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.022 [2024-07-16 00:17:31.280237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.022 [2024-07-16 00:17:31.280269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.022 [2024-07-16 00:17:31.280287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.022 [2024-07-16 00:17:31.280553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.022 [2024-07-16 00:17:31.280820] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.022 [2024-07-16 00:17:31.280849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.022 [2024-07-16 00:17:31.280873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.022 [2024-07-16 00:17:31.284942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.022 [2024-07-16 00:17:31.294307] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.022 [2024-07-16 00:17:31.294840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.022 [2024-07-16 00:17:31.294882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.022 [2024-07-16 00:17:31.294901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.022 [2024-07-16 00:17:31.295185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.022 [2024-07-16 00:17:31.295454] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.022 [2024-07-16 00:17:31.295477] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.022 [2024-07-16 00:17:31.295492] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.022 [2024-07-16 00:17:31.299545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.022 [2024-07-16 00:17:31.308883] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.022 [2024-07-16 00:17:31.309385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.022 [2024-07-16 00:17:31.309427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.022 [2024-07-16 00:17:31.309446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.022 [2024-07-16 00:17:31.309725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.022 [2024-07-16 00:17:31.309993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.022 [2024-07-16 00:17:31.310015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.022 [2024-07-16 00:17:31.310031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.022 [2024-07-16 00:17:31.314096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.022 [2024-07-16 00:17:31.323431] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.022 [2024-07-16 00:17:31.323955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.022 [2024-07-16 00:17:31.323996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.022 [2024-07-16 00:17:31.324016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.022 [2024-07-16 00:17:31.324302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.022 [2024-07-16 00:17:31.324572] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.324594] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.324609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.328671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.337953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.338451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.338493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.338512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.338789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.339058] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.339080] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.339095] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.343160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.352532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.352926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.352957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.352975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.353255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.353524] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.353546] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.353568] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.357644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.366972] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.367427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.367468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.367487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.367758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.368026] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.368049] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.368064] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.372131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.381492] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.381976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.382016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.382035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.382319] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.382589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.382611] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.382626] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.386698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.396012] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.396490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.396521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.396539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.396803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.397070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.397092] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.397107] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.401175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.410522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.410972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.411025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.411043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.411316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.411583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.411605] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.411620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.415706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.425040] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.425496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.425550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.425568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.425831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.426098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.426120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.426136] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.430204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.439528] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.440017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.440045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.440063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.440338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.440605] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.440627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.440642] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.444724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.454031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.023 [2024-07-16 00:17:31.454539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.023 [2024-07-16 00:17:31.454580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.023 [2024-07-16 00:17:31.454600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.023 [2024-07-16 00:17:31.454872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.023 [2024-07-16 00:17:31.455291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.023 [2024-07-16 00:17:31.455315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.023 [2024-07-16 00:17:31.455330] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.023 [2024-07-16 00:17:31.459399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.023 [2024-07-16 00:17:31.468475] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.023 [2024-07-16 00:17:31.469011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.023 [2024-07-16 00:17:31.469053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.024 [2024-07-16 00:17:31.469072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.024 [2024-07-16 00:17:31.469356] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.024 [2024-07-16 00:17:31.469625] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.024 [2024-07-16 00:17:31.469647] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.024 [2024-07-16 00:17:31.469662] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.024 [2024-07-16 00:17:31.473731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.024 [2024-07-16 00:17:31.482887] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.024 [2024-07-16 00:17:31.483373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.024 [2024-07-16 00:17:31.483423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.024 [2024-07-16 00:17:31.483441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.024 [2024-07-16 00:17:31.483705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.024 [2024-07-16 00:17:31.483972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.024 [2024-07-16 00:17:31.483994] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.024 [2024-07-16 00:17:31.484009] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.024 [2024-07-16 00:17:31.488072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.024 [2024-07-16 00:17:31.497443] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.024 [2024-07-16 00:17:31.497964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.024 [2024-07-16 00:17:31.498004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.024 [2024-07-16 00:17:31.498024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.024 [2024-07-16 00:17:31.498310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.024 [2024-07-16 00:17:31.498580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.024 [2024-07-16 00:17:31.498602] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.024 [2024-07-16 00:17:31.498618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.024 [2024-07-16 00:17:31.502681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.024 [2024-07-16 00:17:31.511805] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.024 [2024-07-16 00:17:31.512273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.024 [2024-07-16 00:17:31.512322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.024 [2024-07-16 00:17:31.512340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.024 [2024-07-16 00:17:31.512605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.024 [2024-07-16 00:17:31.512873] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.024 [2024-07-16 00:17:31.512894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.024 [2024-07-16 00:17:31.512910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.024 [2024-07-16 00:17:31.516964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.024 [2024-07-16 00:17:31.526304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.024 [2024-07-16 00:17:31.526790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.024 [2024-07-16 00:17:31.526842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.024 [2024-07-16 00:17:31.526860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.024 [2024-07-16 00:17:31.527124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.024 [2024-07-16 00:17:31.527400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.024 [2024-07-16 00:17:31.527422] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.024 [2024-07-16 00:17:31.527437] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.024 [2024-07-16 00:17:31.531492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.540815] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.541264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.541329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.541349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.541619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.541889] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.541910] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.541926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.546005] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.555351] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.555812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.555862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.555886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.556161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.556429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.556452] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.556467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.560559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.569906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.570346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.570387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.570407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.570678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.570946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.570968] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.570984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.575074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.584427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.584960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.585002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.585022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.585311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.585580] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.585602] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.585618] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.589680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.598794] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.599206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.599265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.599283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.599549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.599821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.599844] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.599859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.603938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.613331] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.613816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.613858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.613877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.614161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.614430] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.614452] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.614467] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.618543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.627880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.628384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.628426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.628445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.628716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.628984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.629006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.629022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.633098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.642478] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.642969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.643019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.643037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.643310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.643579] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.643601] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.643616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.647678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.657014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.657546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.657601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.284 [2024-07-16 00:17:31.657620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.284 [2024-07-16 00:17:31.657898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.284 [2024-07-16 00:17:31.658178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.284 [2024-07-16 00:17:31.658201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.284 [2024-07-16 00:17:31.658216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.284 [2024-07-16 00:17:31.662276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.284 [2024-07-16 00:17:31.671406] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.284 [2024-07-16 00:17:31.671881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.284 [2024-07-16 00:17:31.671935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.671955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.672239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.672508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.672530] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.672546] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.676620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.685983] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.686422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.686462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.686482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.686753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.687021] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.687043] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.687059] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.691134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.700493] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.701084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.701126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.701163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.701436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.701704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.701726] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.701742] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.705784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.714861] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.715380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.715435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.715455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.715725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.715994] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.716015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.716031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.720087] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.729440] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.729811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.729843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.729861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.730126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.730404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.730426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.730441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.734523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.743898] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.744424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.744465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.744485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.744761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.745029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.745066] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.745082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.749136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.758358] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.758877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.758918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.758938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.759222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.759492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.759514] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.759529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.763597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.772706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.773201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.773250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.773268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.773532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.773799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.773821] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.773836] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.777916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.285 [2024-07-16 00:17:31.787253] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.285 [2024-07-16 00:17:31.787729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.285 [2024-07-16 00:17:31.787770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.285 [2024-07-16 00:17:31.787789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.285 [2024-07-16 00:17:31.788060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.285 [2024-07-16 00:17:31.788340] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.285 [2024-07-16 00:17:31.788364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.285 [2024-07-16 00:17:31.788379] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.285 [2024-07-16 00:17:31.792460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.545 [2024-07-16 00:17:31.801781] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.545 [2024-07-16 00:17:31.802185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.545 [2024-07-16 00:17:31.802227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.545 [2024-07-16 00:17:31.802246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.545 [2024-07-16 00:17:31.802517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.545 [2024-07-16 00:17:31.802786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.545 [2024-07-16 00:17:31.802808] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.545 [2024-07-16 00:17:31.802823] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.545 [2024-07-16 00:17:31.806914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.545 [2024-07-16 00:17:31.816278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.545 [2024-07-16 00:17:31.816706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.545 [2024-07-16 00:17:31.816737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.545 [2024-07-16 00:17:31.816755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.545 [2024-07-16 00:17:31.817021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.545 [2024-07-16 00:17:31.817299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.545 [2024-07-16 00:17:31.817321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.545 [2024-07-16 00:17:31.817337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.545 [2024-07-16 00:17:31.821381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.545 [2024-07-16 00:17:31.830673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.545 [2024-07-16 00:17:31.831126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.545 [2024-07-16 00:17:31.831200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.545 [2024-07-16 00:17:31.831221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.545 [2024-07-16 00:17:31.831491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.545 [2024-07-16 00:17:31.831760] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.545 [2024-07-16 00:17:31.831781] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.545 [2024-07-16 00:17:31.831797] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.545 [2024-07-16 00:17:31.835854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.545 [2024-07-16 00:17:31.845193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.545 [2024-07-16 00:17:31.845714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.545 [2024-07-16 00:17:31.845755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.545 [2024-07-16 00:17:31.845775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.545 [2024-07-16 00:17:31.846052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.545 [2024-07-16 00:17:31.846334] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.545 [2024-07-16 00:17:31.846357] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.545 [2024-07-16 00:17:31.846373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.545 [2024-07-16 00:17:31.850434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.545 [2024-07-16 00:17:31.859780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.545 [2024-07-16 00:17:31.860237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.545 [2024-07-16 00:17:31.860287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.545 [2024-07-16 00:17:31.860306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.545 [2024-07-16 00:17:31.860571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.545 [2024-07-16 00:17:31.860839] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.545 [2024-07-16 00:17:31.860861] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.546 [2024-07-16 00:17:31.860876] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.546 [2024-07-16 00:17:31.864961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.546 [2024-07-16 00:17:31.874315] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.874789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.874818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.874836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.875100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.875376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.875398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.875414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.879462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.888804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.889297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.889338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.889358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.889629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.889897] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.889919] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.889949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.894033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.903177] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.903769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.903811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.903830] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.904102] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.904382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.904405] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.904421] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.908494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.917574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.918059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.918115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.918135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.918420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.918689] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.918711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.918726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.922804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.932189] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.932731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.932773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.932792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.933063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.933344] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.933367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.933382] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.937459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.946576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.947012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.947070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.947088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.947364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.947632] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.947654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.947669] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.951734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.961070] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.961471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.961501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.961519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.961784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.962050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.962072] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.962087] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.966176] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.975507] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.975946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.975995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.976012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.976292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.976560] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.976582] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.976597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.980678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:31.989999] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:31.990485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:31.990536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:31.990553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:31.990817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:31.991093] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:31.991115] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:31.991148] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:31.995267] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:32.004461] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:32.004898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:32.004927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:32.004944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:32.005222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:32.005490] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.546 [2024-07-16 00:17:32.005512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.546 [2024-07-16 00:17:32.005528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.546 [2024-07-16 00:17:32.009611] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.546 [2024-07-16 00:17:32.019003] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.546 [2024-07-16 00:17:32.019573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.546 [2024-07-16 00:17:32.019621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.546 [2024-07-16 00:17:32.019639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.546 [2024-07-16 00:17:32.019903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.546 [2024-07-16 00:17:32.020178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.547 [2024-07-16 00:17:32.020201] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.547 [2024-07-16 00:17:32.020216] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.547 [2024-07-16 00:17:32.024286] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.547 [2024-07-16 00:17:32.033438] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.547 [2024-07-16 00:17:32.033915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.547 [2024-07-16 00:17:32.033964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.547 [2024-07-16 00:17:32.033981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.547 [2024-07-16 00:17:32.034258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.547 [2024-07-16 00:17:32.034525] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.547 [2024-07-16 00:17:32.034547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.547 [2024-07-16 00:17:32.034562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.547 [2024-07-16 00:17:32.038625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.547 [2024-07-16 00:17:32.047948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.547 [2024-07-16 00:17:32.048435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.547 [2024-07-16 00:17:32.048491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.547 [2024-07-16 00:17:32.048511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.547 [2024-07-16 00:17:32.048782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.547 [2024-07-16 00:17:32.049051] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.547 [2024-07-16 00:17:32.049074] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.547 [2024-07-16 00:17:32.049089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.547 [2024-07-16 00:17:32.053145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.062516] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.062953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.063003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.063021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.063298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.063567] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.063589] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.063604] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.067672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.077093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.077556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.077607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.077624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.077888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.078166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.078189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.078204] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.082308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.091479] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.091920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.091949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.091973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.092252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.092520] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.092542] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.092557] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.096628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.105948] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.106360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.106402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.106422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.106701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.106970] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.106993] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.107008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.111057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.120363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.120894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.120936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.120955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.121239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.121508] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.121530] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.121546] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.125587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.134914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.135457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.135511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.135531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.135802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.136071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.136099] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.136115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.140165] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.149457] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:57.815 [2024-07-16 00:17:32.149848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.815 [2024-07-16 00:17:32.149878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:57.815 [2024-07-16 00:17:32.149896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:57.815 [2024-07-16 00:17:32.150169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:57.815 [2024-07-16 00:17:32.150436] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:57.815 [2024-07-16 00:17:32.150458] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:57.815 [2024-07-16 00:17:32.150473] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:57.815 [2024-07-16 00:17:32.154511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:57.815 [2024-07-16 00:17:32.163798] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.815 [2024-07-16 00:17:32.164163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.815 [2024-07-16 00:17:32.164194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.815 [2024-07-16 00:17:32.164212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.815 [2024-07-16 00:17:32.164476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.815 [2024-07-16 00:17:32.164744] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.815 [2024-07-16 00:17:32.164766] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.815 [2024-07-16 00:17:32.164781] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.815 [2024-07-16 00:17:32.168823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.815 [2024-07-16 00:17:32.178162] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.815 [2024-07-16 00:17:32.178629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.178669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.178689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.178960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.179241] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.179265] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.179280] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.183351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.192684] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.193191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.193233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.193253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.193524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.193793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.193815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.193830] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.197884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.207205] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.207736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.207777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.207796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.208068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.208346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.208369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.208384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.212421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.221727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.222217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.222248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.222266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.222531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.222798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.222820] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.222835] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.226884] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.236236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.236710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.236761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.236784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.237049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.237326] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.237348] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.237364] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.241427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.250810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.251286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.251317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.251334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.251598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.251865] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.251887] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.251903] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.255991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.265400] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.265907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.265949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.265968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.266253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.266522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.266544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.266560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.270623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.279847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.280386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.280428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.280447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.280718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.280987] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.281022] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.281038] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.285145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.294291] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.294820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.294861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.294881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.295164] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.295434] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.295456] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.295471] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.299541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.308686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.309185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.309226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.309246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.309517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.309786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.309807] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.309823] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:57.816 [2024-07-16 00:17:32.313871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:57.816 [2024-07-16 00:17:32.323188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:57.816 [2024-07-16 00:17:32.323684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.816 [2024-07-16 00:17:32.323725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:57.816 [2024-07-16 00:17:32.323745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:57.816 [2024-07-16 00:17:32.324015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:57.816 [2024-07-16 00:17:32.324297] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:57.816 [2024-07-16 00:17:32.324320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:57.816 [2024-07-16 00:17:32.324336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.328393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.337731] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.338263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.338305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.338324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.338596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.338864] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.338886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.338901] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.342979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.352149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.352701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.352742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.352761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.353032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.353314] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.353337] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.353352] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.357427] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.366578] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.367056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.367086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.367104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.367379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.367647] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.367669] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.367684] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.371751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.381211] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.381683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.381733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.381751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.382022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.382298] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.382321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.382336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.386398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.395821] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.396282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.396330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.396348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.396612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.396879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.396901] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.396916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.400979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.410427] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.410802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.410833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.410850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.411115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.411398] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.411423] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.411438] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.415514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.424863] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.425342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.425382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.425402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.078 [2024-07-16 00:17:32.425673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.078 [2024-07-16 00:17:32.425941] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.078 [2024-07-16 00:17:32.425963] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.078 [2024-07-16 00:17:32.425985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.078 [2024-07-16 00:17:32.430072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.078 [2024-07-16 00:17:32.439474] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.078 [2024-07-16 00:17:32.440002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.078 [2024-07-16 00:17:32.440042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.078 [2024-07-16 00:17:32.440062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.440345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.440613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.440635] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.440651] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.444751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.453886] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.454384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.454426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.454446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.454717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.454985] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.455007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.455022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.459097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.468439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.468943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.469005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.469025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.469307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.469582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.469604] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.469619] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.473679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.482921] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.483387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.483442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.483461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.483726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.483993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.484015] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.484029] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.488094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.497496] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.498030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.498071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.498090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.498373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.498643] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.498665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.498680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.502769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.511956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.512395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.512446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.512465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.512730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.512996] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.513018] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.513033] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.517120] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.526471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.526909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.526958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.526975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.527251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.527525] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.527547] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.527562] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.531640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.541080] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.541570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.541620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.541638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.541902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.542179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.542202] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.542223] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.546273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.555624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.079 [2024-07-16 00:17:32.556059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.079 [2024-07-16 00:17:32.556105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.079 [2024-07-16 00:17:32.556122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.079 [2024-07-16 00:17:32.556395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.079 [2024-07-16 00:17:32.556663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.079 [2024-07-16 00:17:32.556685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.079 [2024-07-16 00:17:32.556700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.079 [2024-07-16 00:17:32.560759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.079 [2024-07-16 00:17:32.570203] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.079 [2024-07-16 00:17:32.570594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.079 [2024-07-16 00:17:32.570624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.079 [2024-07-16 00:17:32.570641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.079 [2024-07-16 00:17:32.570906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.079 [2024-07-16 00:17:32.571183] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.079 [2024-07-16 00:17:32.571206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.080 [2024-07-16 00:17:32.571221] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.080 [2024-07-16 00:17:32.575296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.080 [2024-07-16 00:17:32.584720] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.080 [2024-07-16 00:17:32.585209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.080 [2024-07-16 00:17:32.585302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.080 [2024-07-16 00:17:32.585323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.080 [2024-07-16 00:17:32.585594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.080 [2024-07-16 00:17:32.585862] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.080 [2024-07-16 00:17:32.585884] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.080 [2024-07-16 00:17:32.585899] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.080 [2024-07-16 00:17:32.589952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.599299] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.599822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.599865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.599885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.600169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.600439] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.600461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.600476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.604562] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.613891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.614361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.614412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.614430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.614695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.614970] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.614992] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.615008] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.619080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.628472] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.628982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.629023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.629049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.629339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.629608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.629631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.629647] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.633755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.642889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.643403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.643444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.643464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.643735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.644003] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.644025] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.644040] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.648115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.657511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.657891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.657922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.657940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.658216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.658484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.658506] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.658521] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.662605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.671991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.672502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.672543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.672563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.672834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.673103] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.673151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.673170] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.677258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.686385] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.686843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.686903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.686920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.687195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.687463] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.687485] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.687500] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.691593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.700785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.701327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.701369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.701389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.701660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.701928] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.701950] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.701966] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.706038] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.339 [2024-07-16 00:17:32.715303] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.339 [2024-07-16 00:17:32.715896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-16 00:17:32.715937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.339 [2024-07-16 00:17:32.715957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.339 [2024-07-16 00:17:32.716242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.339 [2024-07-16 00:17:32.716511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.339 [2024-07-16 00:17:32.716533] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.339 [2024-07-16 00:17:32.716549] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.339 [2024-07-16 00:17:32.720606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.729715] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.730209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.730265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.730285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.730556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.730824] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.730846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.730861] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.734942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.744354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.744797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.744848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.744866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.745130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.745408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.745430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.745446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.749511] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.758933] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.759447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.759488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.759508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.759779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.760047] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.760069] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.760084] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.764185] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.773306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.773740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.773803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.773821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.774094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.774375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.774398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.774413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.778473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.787877] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.788367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.788415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.788433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.788697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.788973] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.788994] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.789010] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.793092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.802448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.802977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.803019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.803039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.803324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.803593] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.803616] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.803631] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.807673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.817064] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.817542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.817573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.817590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.817854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.818121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.818154] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.818181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.822283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.831450] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.831933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.831981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.831999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.832273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.832540] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.832562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.832578] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.836646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.340 [2024-07-16 00:17:32.846024] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.340 [2024-07-16 00:17:32.846484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-16 00:17:32.846513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.340 [2024-07-16 00:17:32.846531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.340 [2024-07-16 00:17:32.846795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.340 [2024-07-16 00:17:32.847062] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.340 [2024-07-16 00:17:32.847084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.340 [2024-07-16 00:17:32.847099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.340 [2024-07-16 00:17:32.851175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.601 [2024-07-16 00:17:32.860512] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.601 [2024-07-16 00:17:32.860955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.601 [2024-07-16 00:17:32.861005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.601 [2024-07-16 00:17:32.861023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.601 [2024-07-16 00:17:32.861301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.601 [2024-07-16 00:17:32.861569] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.601 [2024-07-16 00:17:32.861591] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.601 [2024-07-16 00:17:32.861607] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.601 [2024-07-16 00:17:32.865667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.601 [2024-07-16 00:17:32.875036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.601 [2024-07-16 00:17:32.875503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.601 [2024-07-16 00:17:32.875553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.601 [2024-07-16 00:17:32.875570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.601 [2024-07-16 00:17:32.875834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.601 [2024-07-16 00:17:32.876101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.601 [2024-07-16 00:17:32.876122] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.601 [2024-07-16 00:17:32.876147] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.601 [2024-07-16 00:17:32.880224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.601 [2024-07-16 00:17:32.889661] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.601 [2024-07-16 00:17:32.890195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.601 [2024-07-16 00:17:32.890236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.601 [2024-07-16 00:17:32.890257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.601 [2024-07-16 00:17:32.890528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.601 [2024-07-16 00:17:32.890796] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.601 [2024-07-16 00:17:32.890818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.601 [2024-07-16 00:17:32.890833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.601 [2024-07-16 00:17:32.894921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.601 [2024-07-16 00:17:32.904051] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.601 [2024-07-16 00:17:32.904514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.601 [2024-07-16 00:17:32.904546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.601 [2024-07-16 00:17:32.904564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.601 [2024-07-16 00:17:32.904829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.601 [2024-07-16 00:17:32.905096] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.905118] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.905133] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.909223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.918635] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.919076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.919120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.919148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.919421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.919689] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.919711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.919727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.923786] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.933165] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.933651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.933700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.933717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.933980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.934259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.934281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.934296] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.938364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.947759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.948159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.948189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.948206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.948470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.948737] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.948759] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.948774] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.952880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.962269] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.962877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.962919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.962938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.963222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.963492] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.963514] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.963543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.967609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.976690] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.977220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.977262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.977282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.977552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.977821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.977843] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.977859] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.981915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:32.991072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:32.991547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:32.991596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:32.991614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:32.991878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:32.992155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:32.992178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:32.992193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:32.996238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:33.005537] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:33.005978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:33.006019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:33.006045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:33.006328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:33.006599] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:33.006622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:33.006638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:33.010683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:33.020021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:33.020534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:33.020581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:33.020601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:33.020873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:33.021151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:33.021174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:33.021189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:33.025258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:33.034600] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:33.035067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:33.035117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:33.035135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:33.035411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:33.035678] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:33.035700] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:33.035715] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:33.039775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.602 [2024-07-16 00:17:33.049103] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.602 [2024-07-16 00:17:33.049631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.602 [2024-07-16 00:17:33.049685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.602 [2024-07-16 00:17:33.049705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.602 [2024-07-16 00:17:33.049976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.602 [2024-07-16 00:17:33.050265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.602 [2024-07-16 00:17:33.050289] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.602 [2024-07-16 00:17:33.050304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.602 [2024-07-16 00:17:33.054382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.603 [2024-07-16 00:17:33.063433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.603 [2024-07-16 00:17:33.063856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.603 [2024-07-16 00:17:33.063897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.603 [2024-07-16 00:17:33.063916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.603 [2024-07-16 00:17:33.064197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.603 [2024-07-16 00:17:33.064472] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.603 [2024-07-16 00:17:33.064495] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.603 [2024-07-16 00:17:33.064511] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.603 [2024-07-16 00:17:33.068553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.603 [2024-07-16 00:17:33.077914] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.603 [2024-07-16 00:17:33.078361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.603 [2024-07-16 00:17:33.078412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.603 [2024-07-16 00:17:33.078429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.603 [2024-07-16 00:17:33.078694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.603 [2024-07-16 00:17:33.078969] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.603 [2024-07-16 00:17:33.078991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.603 [2024-07-16 00:17:33.079007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.603 [2024-07-16 00:17:33.083051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.603 [2024-07-16 00:17:33.092414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.603 [2024-07-16 00:17:33.092864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.603 [2024-07-16 00:17:33.092908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.603 [2024-07-16 00:17:33.092925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.603 [2024-07-16 00:17:33.093199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.603 [2024-07-16 00:17:33.093466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.603 [2024-07-16 00:17:33.093488] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.603 [2024-07-16 00:17:33.093503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.603 [2024-07-16 00:17:33.097581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.603 [2024-07-16 00:17:33.106926] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.603 [2024-07-16 00:17:33.107345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.603 [2024-07-16 00:17:33.107388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.603 [2024-07-16 00:17:33.107406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.603 [2024-07-16 00:17:33.107676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.603 [2024-07-16 00:17:33.107943] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.603 [2024-07-16 00:17:33.107964] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.603 [2024-07-16 00:17:33.107980] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.603 [2024-07-16 00:17:33.112053] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.121384] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.121862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.121891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.121908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.122182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.122466] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.122489] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.122504] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.126555] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.135896] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.136387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.136437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.136455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.136719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.136986] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.137007] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.137022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.141076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.150439] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.151024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.151066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.151085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.151368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.151638] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.151659] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.151674] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.155742] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.164849] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.165338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.165378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.165403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.165676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.165944] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.165966] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.165982] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.170040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.179404] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.179944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.179986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.180005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.180290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.180559] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.180581] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.180597] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.184658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.193748] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.194225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.194266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.194285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.194559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.194827] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.194849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.194865] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.198925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.208284] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.208796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.208838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.208858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.209130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.209408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.209437] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.209453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.213497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.222783] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.223198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.223237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.223268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.223533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.223801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.223823] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.223838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.227877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.237190] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.861 [2024-07-16 00:17:33.237694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.861 [2024-07-16 00:17:33.237740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.861 [2024-07-16 00:17:33.237758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.861 [2024-07-16 00:17:33.238022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.861 [2024-07-16 00:17:33.238299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.861 [2024-07-16 00:17:33.238321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.861 [2024-07-16 00:17:33.238337] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.861 [2024-07-16 00:17:33.242373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.861 [2024-07-16 00:17:33.251662] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.252097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.252126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.252151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.252417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.252684] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.252706] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.252722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.256760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.266054] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.862 [2024-07-16 00:17:33.266487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.862 [2024-07-16 00:17:33.266529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:58.862 [2024-07-16 00:17:33.266547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:58.862 [2024-07-16 00:17:33.266811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:58.862 [2024-07-16 00:17:33.267078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:58.862 [2024-07-16 00:17:33.267100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:58.862 [2024-07-16 00:17:33.267116] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.862 [2024-07-16 00:17:33.271159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:58.862 [2024-07-16 00:17:33.280445] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.280883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.280912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.280929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.281202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.281469] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.281491] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.281506] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.285543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.294825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.295194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.295223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.295241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.295505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.295772] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.295795] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.295810] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.299847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.309377] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.309854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.309895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.309915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.310204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.310474] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.310496] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.310512] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.314565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.323887] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.324371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.324419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.324437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.324702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.324969] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.324991] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.325007] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.329061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.338413] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.338940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.338981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.339000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.339284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.339554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.339576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.339591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.343661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.352769] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.353257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.353299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.353319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.353589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.353858] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.353880] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.353902] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.357961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:58.862 [2024-07-16 00:17:33.367309] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:58.862 [2024-07-16 00:17:33.367810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.862 [2024-07-16 00:17:33.367851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:58.862 [2024-07-16 00:17:33.367870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:58.862 [2024-07-16 00:17:33.368152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:58.862 [2024-07-16 00:17:33.368422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:58.862 [2024-07-16 00:17:33.368444] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:58.862 [2024-07-16 00:17:33.368459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:58.862 [2024-07-16 00:17:33.372522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 [2024-07-16 00:17:33.381829] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.382287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.382348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.382368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.382639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.382907] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.382929] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.382945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.122 [2024-07-16 00:17:33.387002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 [2024-07-16 00:17:33.396343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.396846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.396899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.396919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.397208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.397478] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.397499] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.397515] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.122 [2024-07-16 00:17:33.401580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 [2024-07-16 00:17:33.410693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.411181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.411212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.411230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.411495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.411761] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.411783] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.411799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.122 [2024-07-16 00:17:33.415872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1380479 Killed "${NVMF_APP[@]}" "$@"
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.122 [2024-07-16 00:17:33.425187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.425604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.425646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.425665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.425942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.426222] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.426245] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.426261] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1381253
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1381253
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1381253 ']'
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:59.122 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.122 [2024-07-16 00:17:33.430304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 [2024-07-16 00:17:33.439590] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.440014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.440053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.440072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.440346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.440613] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.440634] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.440650] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.122 [2024-07-16 00:17:33.444688] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.122 [2024-07-16 00:17:33.453970] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.122 [2024-07-16 00:17:33.454393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.122 [2024-07-16 00:17:33.454423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.122 [2024-07-16 00:17:33.454440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.122 [2024-07-16 00:17:33.454704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.122 [2024-07-16 00:17:33.454972] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.122 [2024-07-16 00:17:33.454994] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.122 [2024-07-16 00:17:33.455010] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.459044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.468355] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.468849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.468882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.468901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.469181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.469451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.469473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.469489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.473525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.475011] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:33:59.123 [2024-07-16 00:17:33.475078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:59.123 [2024-07-16 00:17:33.482816] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.483218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.483260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.483288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.483563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.483832] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.483854] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.483870] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.487915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.497387] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.497781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.497822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.497842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.498113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.498391] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.498414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.498430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.502470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 EAL: No free 2048 kB hugepages reported on node 1
00:33:59.123 [2024-07-16 00:17:33.511755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.512190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.512232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.512251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.512522] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.512792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.512814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.512831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.516878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.526172] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.526624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.526666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.526686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.526957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.527244] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.527267] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.527283] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.531323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.540224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:33:59.123 [2024-07-16 00:17:33.540610] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.123 [2024-07-16 00:17:33.541065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.123 [2024-07-16 00:17:33.541107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.123 [2024-07-16 00:17:33.541127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.123 [2024-07-16 00:17:33.541409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.123 [2024-07-16 00:17:33.541679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.123 [2024-07-16 00:17:33.541701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.123 [2024-07-16 00:17:33.541716] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.123 [2024-07-16 00:17:33.545806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.123 [2024-07-16 00:17:33.555008] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:59.123 [2024-07-16 00:17:33.555551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.123 [2024-07-16 00:17:33.555589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420 00:33:59.123 [2024-07-16 00:17:33.555609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set 00:33:59.123 [2024-07-16 00:17:33.555883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor 00:33:59.123 [2024-07-16 00:17:33.556164] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:59.123 [2024-07-16 00:17:33.556187] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:59.123 [2024-07-16 00:17:33.556205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:59.123 [2024-07-16 00:17:33.560247] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:59.123 [2024-07-16 00:17:33.569545] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.124 [2024-07-16 00:17:33.570075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.124 [2024-07-16 00:17:33.570113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.124 [2024-07-16 00:17:33.570133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.124 [2024-07-16 00:17:33.570413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.124 [2024-07-16 00:17:33.570684] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.124 [2024-07-16 00:17:33.570707] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.124 [2024-07-16 00:17:33.570724] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.124 [2024-07-16 00:17:33.574780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.124 [2024-07-16 00:17:33.584079] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.124 [2024-07-16 00:17:33.584639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.124 [2024-07-16 00:17:33.584691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.124 [2024-07-16 00:17:33.584714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.124 [2024-07-16 00:17:33.584997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.124 [2024-07-16 00:17:33.585287] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.124 [2024-07-16 00:17:33.585310] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.124 [2024-07-16 00:17:33.585328] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.124 [2024-07-16 00:17:33.589422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.124 [2024-07-16 00:17:33.598575] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.124 [2024-07-16 00:17:33.599099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.124 [2024-07-16 00:17:33.599136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.124 [2024-07-16 00:17:33.599167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.124 [2024-07-16 00:17:33.599441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.124 [2024-07-16 00:17:33.599713] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.124 [2024-07-16 00:17:33.599735] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.124 [2024-07-16 00:17:33.599753] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.124 [2024-07-16 00:17:33.603791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.124 [2024-07-16 00:17:33.613102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.124 [2024-07-16 00:17:33.613615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.124 [2024-07-16 00:17:33.613653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.124 [2024-07-16 00:17:33.613673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.124 [2024-07-16 00:17:33.613945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.124 [2024-07-16 00:17:33.614228] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.124 [2024-07-16 00:17:33.614251] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.124 [2024-07-16 00:17:33.614268] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.124 [2024-07-16 00:17:33.618309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.124 [2024-07-16 00:17:33.627010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:59.124 [2024-07-16 00:17:33.627052] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:59.124 [2024-07-16 00:17:33.627068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:59.124 [2024-07-16 00:17:33.627090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:59.124 [2024-07-16 00:17:33.627103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:59.124 [2024-07-16 00:17:33.627178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:33:59.124 [2024-07-16 00:17:33.627261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:33:59.124 [2024-07-16 00:17:33.627294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:59.124 [2024-07-16 00:17:33.627616] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.124 [2024-07-16 00:17:33.628084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.124 [2024-07-16 00:17:33.628117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.124 [2024-07-16 00:17:33.628145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.124 [2024-07-16 00:17:33.628419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.124 [2024-07-16 00:17:33.628690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.124 [2024-07-16 00:17:33.628713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.124 [2024-07-16 00:17:33.628731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.124 [2024-07-16 00:17:33.632802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.383 [2024-07-16 00:17:33.642249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.383 [2024-07-16 00:17:33.642751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.383 [2024-07-16 00:17:33.642787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.383 [2024-07-16 00:17:33.642808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.383 [2024-07-16 00:17:33.643086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.383 [2024-07-16 00:17:33.643368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.383 [2024-07-16 00:17:33.643392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.383 [2024-07-16 00:17:33.643410] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.383 [2024-07-16 00:17:33.647505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.383 [2024-07-16 00:17:33.656700] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.383 [2024-07-16 00:17:33.657216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.383 [2024-07-16 00:17:33.657255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.383 [2024-07-16 00:17:33.657276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.383 [2024-07-16 00:17:33.657552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.383 [2024-07-16 00:17:33.657826] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.383 [2024-07-16 00:17:33.657848] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.383 [2024-07-16 00:17:33.657866] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.383 [2024-07-16 00:17:33.661976] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.383 [2024-07-16 00:17:33.671200] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.383 [2024-07-16 00:17:33.671699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.383 [2024-07-16 00:17:33.671735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.383 [2024-07-16 00:17:33.671755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.383 [2024-07-16 00:17:33.672025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.383 [2024-07-16 00:17:33.672305] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.383 [2024-07-16 00:17:33.672328] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.672346] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.676390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 [2024-07-16 00:17:33.685780] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.686308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.686346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.686366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.686640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.686913] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.686936] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.686953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.691037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 [2024-07-16 00:17:33.700372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.700875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.700912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.700931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.701214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.701486] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.701509] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.701526] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.705570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 [2024-07-16 00:17:33.714860] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.715236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.715265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.715291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.715556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.715824] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.715846] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.715862] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.719899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.729421] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.729858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.729902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.729922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.730206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.730476] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.730498] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.730514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.734642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 [2024-07-16 00:17:33.743945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:59.384 [2024-07-16 00:17:33.744370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:59.384 [2024-07-16 00:17:33.744413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.744434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.744706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.744976] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.744998] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.745014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.746067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:59.384 [2024-07-16 00:17:33.749071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.758394] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.758830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.758872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.758892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.759175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.759451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.759473] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.759489] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.763537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 [2024-07-16 00:17:33.772891] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.773434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.773473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.773493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.773768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.774042] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.774065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.774082] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 [2024-07-16 00:17:33.778173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 Malloc0
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.787501] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 [2024-07-16 00:17:33.787971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.384 [2024-07-16 00:17:33.788004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x153c950 with addr=10.0.0.2, port=4420
00:33:59.384 [2024-07-16 00:17:33.788024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c950 is same with the state(5) to be set
00:33:59.384 [2024-07-16 00:17:33.788302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c950 (9): Bad file descriptor
00:33:59.384 [2024-07-16 00:17:33.788573] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:59.384 [2024-07-16 00:17:33.788602] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:59.384 [2024-07-16 00:17:33.788620] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.792661] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:59.384 [2024-07-16 00:17:33.800874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:59.384 [2024-07-16 00:17:33.801954] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.384 00:17:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1380684
00:33:59.642 [2024-07-16 00:17:33.961688] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:09.612
00:34:09.612 Latency(us)
00:34:09.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:09.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:09.612 Verification LBA range: start 0x0 length 0x4000
00:34:09.612 Nvme1n1 : 15.01 5824.32 22.75 7568.28 0.00 9527.47 652.33 20777.34
00:34:09.612 ===================================================================================================================
00:34:09.612 Total : 5824.32 22.75 7568.28 0.00 9527.47 652.33 20777.34
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:09.612 rmmod nvme_tcp
00:34:09.612 rmmod nvme_fabrics
00:34:09.612 rmmod nvme_keyring
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1381253 ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1381253 ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1381253'
00:34:09.612 killing process with pid 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1381253
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:09.612 00:17:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:10.989 00:17:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:10.989
00:34:10.989 real 0m21.725s
00:34:10.989 user 0m57.997s
00:34:10.989 sys 0m4.402s
00:17:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:34:10.989 00:17:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:10.989 ************************************
00:34:10.989 END TEST nvmf_bdevperf
00:34:10.989 ************************************
00:34:10.989 00:17:45 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:10.989 00:17:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:34:10.989 00:17:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:34:10.989 00:17:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:11.248 ************************************
00:34:11.248 START TEST nvmf_target_disconnect
00:34:11.248 ************************************
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:34:11.249 * Looking for test storage...
00:34:11.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:11.249 00:17:45
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:11.249 00:17:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:12.627 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:12.627 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.627 00:17:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:12.627 Found net devices under 0000:08:00.0: cvl_0_0 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.627 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:12.628 Found net devices under 0000:08:00.1: cvl_0_1 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:12.628 00:17:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.628 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:12.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:34:12.894 00:34:12.894 --- 10.0.0.2 ping statistics --- 00:34:12.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.894 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:12.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:34:12.894 00:34:12.894 --- 10.0.0.1 ping statistics --- 00:34:12.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.894 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:12.894 00:17:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.894 ************************************ 00:34:12.894 START TEST nvmf_target_disconnect_tc1 00:34:12.894 ************************************ 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.894 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.894 00:17:47 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:12.895 EAL: No free 2048 kB hugepages reported on node 1 00:34:12.895 [2024-07-16 00:17:47.345026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:12.895 [2024-07-16 00:17:47.345102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11035f0 with addr=10.0.0.2, port=4420 00:34:12.895 [2024-07-16 00:17:47.345154] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:12.895 [2024-07-16 00:17:47.345179] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:12.895 [2024-07-16 00:17:47.345194] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:12.895 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:12.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:12.895 Initializing NVMe Controllers 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:12.895 00:34:12.895 real 0m0.095s 00:34:12.895 user 0m0.037s 00:34:12.895 sys 0m0.057s 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:12.895 ************************************ 00:34:12.895 END TEST nvmf_target_disconnect_tc1 00:34:12.895 ************************************ 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:12.895 ************************************ 00:34:12.895 START TEST nvmf_target_disconnect_tc2 00:34:12.895 ************************************ 00:34:12.895 00:17:47 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1383523 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1383523 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1383523 ']' 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:12.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:12.895 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.199 [2024-07-16 00:17:47.450547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:13.199 [2024-07-16 00:17:47.450643] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:13.199 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.199 [2024-07-16 00:17:47.514060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:13.199 [2024-07-16 00:17:47.602668] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:13.199 [2024-07-16 00:17:47.602721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:13.199 [2024-07-16 00:17:47.602737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:13.199 [2024-07-16 00:17:47.602754] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:13.199 [2024-07-16 00:17:47.602767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:13.199 [2024-07-16 00:17:47.603120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:13.199 [2024-07-16 00:17:47.603192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:13.199 [2024-07-16 00:17:47.603351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:13.199 [2024-07-16 00:17:47.603495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.485 Malloc0 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.485 [2024-07-16 00:17:47.770030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.485 [2024-07-16 00:17:47.798282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1383630
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:34:13.485 00:17:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:34:13.485 EAL: No free 2048 kB hugepages reported on node 1
00:34:15.393 00:17:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1383523
00:34:15.393 00:17:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Read completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.393 starting I/O failed
00:34:15.393 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 [2024-07-16 00:17:49.824732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 [2024-07-16 00:17:49.825226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 [2024-07-16 00:17:49.825611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Read completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 Write completed with error (sct=0, sc=8)
00:34:15.394 starting I/O failed
00:34:15.394 [2024-07-16 00:17:49.825948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:15.394 [2024-07-16 00:17:49.826103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.826148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.826348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.826381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.826518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.826550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.826710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.826757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.826872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.826903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.827114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.394 [2024-07-16 00:17:49.827212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.394 qpair failed and we were unable to recover it.
00:34:15.394 [2024-07-16 00:17:49.827432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.827464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.827614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.827671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.827828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.827875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.827998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.828157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.828285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.828455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.828688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.828826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.828853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.829825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.829853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.830898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.830926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.831122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.831160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.831297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.831351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.831543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.831568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.831758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.831806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.831929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.831969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.832155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.832184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.832304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.832335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.832471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.832514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.832646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.832677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.833959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.833984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.834102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.834313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.834550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.834672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.395 [2024-07-16 00:17:49.834844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.395 qpair failed and we were unable to recover it.
00:34:15.395 [2024-07-16 00:17:49.834972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.835932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.835971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.836901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.836927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.837960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.837986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.838964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.838990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.839938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.839965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.840050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.840076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.396 [2024-07-16 00:17:49.840161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.396 [2024-07-16 00:17:49.840188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.396 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.840905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.840990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.841965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.841993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.842908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.842984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.843865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.843972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.844914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.844941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.845046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.845106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.845214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.845243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.845334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.845362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.397 qpair failed and we were unable to recover it.
00:34:15.397 [2024-07-16 00:17:49.845477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.397 [2024-07-16 00:17:49.845508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.845605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.845631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.845742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.845768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.845878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.845904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.846954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.846981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.847948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.847991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.848937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.848986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.849084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.849113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.849215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.849242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.849336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.398 [2024-07-16 00:17:49.849363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.398 qpair failed and we were unable to recover it.
00:34:15.398 [2024-07-16 00:17:49.849439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.849467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 00:34:15.398 [2024-07-16 00:17:49.849572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.849598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 00:34:15.398 [2024-07-16 00:17:49.849685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.849713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 00:34:15.398 [2024-07-16 00:17:49.849792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.849819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 00:34:15.398 [2024-07-16 00:17:49.849934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.849962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 
00:34:15.398 [2024-07-16 00:17:49.850047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.850073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.398 qpair failed and we were unable to recover it. 00:34:15.398 [2024-07-16 00:17:49.850156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.398 [2024-07-16 00:17:49.850183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.850275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.850389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.850539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.850677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.850784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.850924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.850955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.851036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.851182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.851390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.851565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.851719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.851891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.851944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.852089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.852153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.852268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.852321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.852510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.852560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.852684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.852741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.852892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.852917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.853130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.853785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.853910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.853940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.854400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.854824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.854962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.855090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.855254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.855405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.855578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.855758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.855809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 
00:34:15.399 [2024-07-16 00:17:49.855976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.856002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.856169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.399 [2024-07-16 00:17:49.856196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.399 qpair failed and we were unable to recover it. 00:34:15.399 [2024-07-16 00:17:49.856284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.856309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.856385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.856410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.856526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.856580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.856678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.856740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.856905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.856957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.857058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.857187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.857313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.857457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.857665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.857881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.857909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.858018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.858235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.858467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.858620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.858729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.858897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.858954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.859168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.859836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.859895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.859984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.860110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.860297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.860445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.860571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.860749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.860933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.860975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.861329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 00:34:15.400 [2024-07-16 00:17:49.861922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.400 [2024-07-16 00:17:49.861978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.400 qpair failed and we were unable to recover it. 
00:34:15.400 [2024-07-16 00:17:49.862082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.400 [2024-07-16 00:17:49.862109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.400 qpair failed and we were unable to recover it.
00:34:15.400 [2024-07-16 00:17:49.862208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.400 [2024-07-16 00:17:49.862235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.400 qpair failed and we were unable to recover it.
00:34:15.400 [2024-07-16 00:17:49.862325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.400 [2024-07-16 00:17:49.862352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.400 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.862452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.862494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.862632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.862713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.862802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.862828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.862915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.862940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.863845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.863969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.864877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.864905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.865901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.865927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.866934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.866962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.401 [2024-07-16 00:17:49.867627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.401 [2024-07-16 00:17:49.867654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.401 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.867736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.867763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.867857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.867883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.867981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.868874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.868988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.869881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.869907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.870833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.870957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.871882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.871910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.872061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.872211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.872364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.872492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.402 [2024-07-16 00:17:49.872715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.402 qpair failed and we were unable to recover it.
00:34:15.402 [2024-07-16 00:17:49.872825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.872875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.873887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.873933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.874964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.874991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.875904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.875982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.876890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.876916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.877063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.877175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.877324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.877441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.403 [2024-07-16 00:17:49.877558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.403 qpair failed and we were unable to recover it.
00:34:15.403 [2024-07-16 00:17:49.877642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.403 [2024-07-16 00:17:49.877668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.403 qpair failed and we were unable to recover it. 00:34:15.403 [2024-07-16 00:17:49.877752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.403 [2024-07-16 00:17:49.877778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.403 qpair failed and we were unable to recover it. 00:34:15.403 [2024-07-16 00:17:49.877912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.403 [2024-07-16 00:17:49.877963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.403 qpair failed and we were unable to recover it. 00:34:15.403 [2024-07-16 00:17:49.878046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.403 [2024-07-16 00:17:49.878073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.403 qpair failed and we were unable to recover it. 00:34:15.403 [2024-07-16 00:17:49.878171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.403 [2024-07-16 00:17:49.878199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.403 qpair failed and we were unable to recover it. 
00:34:15.403 [2024-07-16 00:17:49.878296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.878435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.878544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.878647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.878786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.878894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.878920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.879614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.879970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.879997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.880097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.880219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.880389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.880640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.880822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.880940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.880967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.881062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.881212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.881411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.881585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.881783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.881925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.881954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.882070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.882211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.882329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.882452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.882603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.882769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.882821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.883010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.883155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.883398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.883536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.883691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.883849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.883893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.884045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.884072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.884166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.884193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 
00:34:15.404 [2024-07-16 00:17:49.884318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.404 [2024-07-16 00:17:49.884367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.404 qpair failed and we were unable to recover it. 00:34:15.404 [2024-07-16 00:17:49.884523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.884562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.884654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.884680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.884818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.884844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.884924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.884952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.885037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.885677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.885932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.885960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.886323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.886944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.886991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.887079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.887303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.887432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.887567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.887769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.887894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.887944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.888609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.888939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.888965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.889176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 
00:34:15.405 [2024-07-16 00:17:49.889806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.405 qpair failed and we were unable to recover it. 00:34:15.405 [2024-07-16 00:17:49.889919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.405 [2024-07-16 00:17:49.889946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.890407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.890910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.890959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.891063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.891731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.891873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.891974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.892354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.892909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.892959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.893083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.893220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.893373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.893509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.893666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.406 [2024-07-16 00:17:49.893828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.893883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.893975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.894002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.894114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.894149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.894237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.894266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 00:34:15.406 [2024-07-16 00:17:49.894384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.406 [2024-07-16 00:17:49.894413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.406 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.894500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.894526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.894641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.894688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.894791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.894841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.894930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.894957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.895079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.895211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.895397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.895532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.895667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.895807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.895927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.895953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.896072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.896098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.896209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.896268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.896394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.896447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.896564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.896622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.896767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.896804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.896993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.897531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.897916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.897941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.898041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.898250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.898419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.898556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.898722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.898970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.898995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.899105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.899375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.899523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.899691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.899829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 
00:34:15.407 [2024-07-16 00:17:49.899940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.899966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.900049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.407 [2024-07-16 00:17:49.900077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.407 qpair failed and we were unable to recover it. 00:34:15.407 [2024-07-16 00:17:49.900189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.900220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.900312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.900338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.900465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.900505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 
00:34:15.681 [2024-07-16 00:17:49.900664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.900718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.900867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.900920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 
00:34:15.681 [2024-07-16 00:17:49.901440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.901928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.901973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.902089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 
00:34:15.681 [2024-07-16 00:17:49.902248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.902437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.902599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.902771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 00:34:15.681 [2024-07-16 00:17:49.902916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.681 [2024-07-16 00:17:49.902942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.681 qpair failed and we were unable to recover it. 
[... the same posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock failure triplet repeats continuously from 00:17:49.903165 through 00:17:49.918024, cycling over tqpair values 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90, all against addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." ...]
00:34:15.684 [2024-07-16 00:17:49.918110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 
00:34:15.684 [2024-07-16 00:17:49.918682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.918896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.918920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.919005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.919030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.919117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.919150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 
00:34:15.684 [2024-07-16 00:17:49.919238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.919264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.919365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.919392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.919493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.684 [2024-07-16 00:17:49.919520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.684 qpair failed and we were unable to recover it. 00:34:15.684 [2024-07-16 00:17:49.919607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.919635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.919720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.919746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.919825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.919851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.919947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.919982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.920447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.920869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.920981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.921128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.921713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.921951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.921993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.922330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.922762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 
00:34:15.685 [2024-07-16 00:17:49.922883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.922916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.685 [2024-07-16 00:17:49.923011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.685 [2024-07-16 00:17:49.923037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.685 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.923508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.923893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.923994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.924127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.924255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.924366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.924485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.924599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.924774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.924908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.924934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.925382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.925892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.925923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.926114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.926748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.926898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.926976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.686 [2024-07-16 00:17:49.927454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 00:34:15.686 [2024-07-16 00:17:49.927934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.686 [2024-07-16 00:17:49.927962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.686 qpair failed and we were unable to recover it. 
00:34:15.689 [2024-07-16 00:17:49.941753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.941779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.941873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.941899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.941984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.942010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.942098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.942123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.942261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.942293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 
00:34:15.689 [2024-07-16 00:17:49.942386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.942412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.942514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.689 [2024-07-16 00:17:49.942546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.689 qpair failed and we were unable to recover it. 00:34:15.689 [2024-07-16 00:17:49.942651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.942695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.942802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.942846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.942929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.942954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.943036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.943620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.943899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.943995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.944270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.944816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.944927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.944954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.945390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.945925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.945951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.946060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.946218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.946342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.946453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.946570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 
00:34:15.690 [2024-07-16 00:17:49.946688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.690 qpair failed and we were unable to recover it. 00:34:15.690 [2024-07-16 00:17:49.946800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.690 [2024-07-16 00:17:49.946826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.946902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.946928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.947249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.947807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.947912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.947938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.948387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.948934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.948961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.949038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.949602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.949948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.949974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.950161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.950720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.950957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.950985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.951084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.951110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.951198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.951225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 
00:34:15.691 [2024-07-16 00:17:49.951311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.951337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.951468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.691 [2024-07-16 00:17:49.951520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.691 qpair failed and we were unable to recover it. 00:34:15.691 [2024-07-16 00:17:49.951621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.692 [2024-07-16 00:17:49.951648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.692 qpair failed and we were unable to recover it. 00:34:15.692 [2024-07-16 00:17:49.951753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.692 [2024-07-16 00:17:49.951797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.692 qpair failed and we were unable to recover it. 00:34:15.692 [2024-07-16 00:17:49.951891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.692 [2024-07-16 00:17:49.951930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.692 qpair failed and we were unable to recover it. 
00:34:15.692 [2024-07-16 00:17:49.952035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.952909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.952934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.953926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.953952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.954895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.954941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.955939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.955964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.956044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.956070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.956169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.956196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.692 qpair failed and we were unable to recover it.
00:34:15.692 [2024-07-16 00:17:49.956287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.692 [2024-07-16 00:17:49.956314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.956935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.956964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.957905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.957930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.958855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.958886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.959908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.959934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.960943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.960968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.693 [2024-07-16 00:17:49.961045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.693 [2024-07-16 00:17:49.961071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.693 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.961946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.961973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.962897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.962988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.694 [2024-07-16 00:17:49.963014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.694 qpair failed and we were unable to recover it.
00:34:15.694 [2024-07-16 00:17:49.963107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 
00:34:15.694 [2024-07-16 00:17:49.963691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.963919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.963945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 
00:34:15.694 [2024-07-16 00:17:49.964261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 
00:34:15.694 [2024-07-16 00:17:49.964810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.964920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.964946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.965030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.694 [2024-07-16 00:17:49.965056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.694 qpair failed and we were unable to recover it. 00:34:15.694 [2024-07-16 00:17:49.965136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.965249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.965354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.965472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.965604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.965708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.965820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.965941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.965967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.966507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.966954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.966981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.967069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.967676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.967896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.967922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.968257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.968825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.968932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.968963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 
00:34:15.695 [2024-07-16 00:17:49.969373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.695 [2024-07-16 00:17:49.969757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.695 qpair failed and we were unable to recover it. 00:34:15.695 [2024-07-16 00:17:49.969837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.969866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.969957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.969984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.970532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.970862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.970888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.971148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.971725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.971936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.971964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.972286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 00:34:15.696 [2024-07-16 00:17:49.972796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.696 [2024-07-16 00:17:49.972824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.696 qpair failed and we were unable to recover it. 
00:34:15.696 [2024-07-16 00:17:49.972911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.696 [2024-07-16 00:17:49.972938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.696 qpair failed and we were unable to recover it.
00:34:15.699 (the three messages above repeat continuously from 00:17:49.972 through 00:17:49.987 for tqpair handles 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90; every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 and no qpair is recovered)
00:34:15.699 [2024-07-16 00:17:49.987717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.987742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.987838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.987864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.987959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.987989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 
00:34:15.699 [2024-07-16 00:17:49.988312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 
00:34:15.699 [2024-07-16 00:17:49.988858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.988888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.988983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.989099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.989244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.989377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 
00:34:15.699 [2024-07-16 00:17:49.989528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.699 [2024-07-16 00:17:49.989657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.699 [2024-07-16 00:17:49.989685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.699 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.989767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.989794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.989888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.989918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.990123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.990717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.990883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.990986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.991357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.991902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.991932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.992027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.992689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.992900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.992926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.993283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.993853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.993958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.993989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.994118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.994174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.994254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.994280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.994377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.994404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 
00:34:15.700 [2024-07-16 00:17:49.994491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.994517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.994615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.700 [2024-07-16 00:17:49.994642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.700 qpair failed and we were unable to recover it. 00:34:15.700 [2024-07-16 00:17:49.994747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.994788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.994919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.994945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 
00:34:15.701 [2024-07-16 00:17:49.995136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 
00:34:15.701 [2024-07-16 00:17:49.995720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.995945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.995971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 
00:34:15.701 [2024-07-16 00:17:49.996272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 00:34:15.701 [2024-07-16 00:17:49.996740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.701 [2024-07-16 00:17:49.996767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.701 qpair failed and we were unable to recover it. 
00:34:15.701 [2024-07-16 00:17:49.996874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.701 [2024-07-16 00:17:49.996915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.701 qpair failed and we were unable to recover it.
[... the three-line record above repeats continuously between host timestamps 2024-07-16 00:17:49.996874 and 00:17:50.011386 (console time 00:34:15.701-00:34:15.704), with only the timestamps varying and the tqpair address alternating among 0x7f48a0000b90, 0x7f4898000b90, and 0x7f4890000b90; every attempt fails identically with errno = 111 against addr=10.0.0.2, port=4420. Repeats elided.]
00:34:15.704 [2024-07-16 00:17:50.011481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.011507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.011603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.011629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.011749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.011789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.011906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.011933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 
00:34:15.704 [2024-07-16 00:17:50.012131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 
00:34:15.704 [2024-07-16 00:17:50.012780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.012901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.012929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 
00:34:15.704 [2024-07-16 00:17:50.013393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.013863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.013889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 
00:34:15.704 [2024-07-16 00:17:50.013976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.014002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.014096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.014125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.014250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.014278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.014362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.014389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 00:34:15.704 [2024-07-16 00:17:50.014474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.704 [2024-07-16 00:17:50.014501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.704 qpair failed and we were unable to recover it. 
00:34:15.704 [2024-07-16 00:17:50.014587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.014614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.014719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.014749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.014842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.014870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.014966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.014993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.015206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.015766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.015890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.015920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.016380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.016869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.016896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.016997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.017591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.017959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.017985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.018201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.018817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.018951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.018978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.019065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.019091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.019183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.019210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.019303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.019331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 
00:34:15.705 [2024-07-16 00:17:50.019412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.705 [2024-07-16 00:17:50.019438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.705 qpair failed and we were unable to recover it. 00:34:15.705 [2024-07-16 00:17:50.019537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.019565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.019657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.019684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.019789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.019827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.019930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.019958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 
00:34:15.706 [2024-07-16 00:17:50.020067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.020093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.020187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.020216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.020305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.020332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.020420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.020446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 00:34:15.706 [2024-07-16 00:17:50.020534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.706 [2024-07-16 00:17:50.020561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.706 qpair failed and we were unable to recover it. 
00:34:15.706 [2024-07-16 00:17:50.020648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.706 [2024-07-16 00:17:50.020679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.706 qpair failed and we were unable to recover it.
00:34:15.706 [2024-07-16 00:17:50.020908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.706 [2024-07-16 00:17:50.020937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.706 qpair failed and we were unable to recover it.
00:34:15.706 [2024-07-16 00:17:50.021398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.706 [2024-07-16 00:17:50.021427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.706 qpair failed and we were unable to recover it.
00:34:15.706 [2024-07-16 00:17:50.024089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.706 [2024-07-16 00:17:50.024125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.706 qpair failed and we were unable to recover it.
00:34:15.709 (the same connect()/qpair-failure triplet repeats roughly 115 times between 00:17:50.020648 and 00:17:50.037266, alternating over tqpairs 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90, all against addr=10.0.0.2, port=4420)
00:34:15.709 [2024-07-16 00:17:50.037348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.037463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.037568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.037674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.037827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 
00:34:15.709 [2024-07-16 00:17:50.037943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.037972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 
00:34:15.709 [2024-07-16 00:17:50.038583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.038968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.038998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 
00:34:15.709 [2024-07-16 00:17:50.039301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 
00:34:15.709 [2024-07-16 00:17:50.039881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.039907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.039991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 
00:34:15.709 [2024-07-16 00:17:50.040483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.709 qpair failed and we were unable to recover it. 00:34:15.709 [2024-07-16 00:17:50.040824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.709 [2024-07-16 00:17:50.040851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.040939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.040967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.041057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.041703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.041942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.041984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.042343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.042825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.042941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.042966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.043536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.043907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.043998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.044105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.044234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.044363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.044500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.044630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.044762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.044886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.044915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 
00:34:15.710 [2024-07-16 00:17:50.045373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.710 [2024-07-16 00:17:50.045845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.710 qpair failed and we were unable to recover it. 00:34:15.710 [2024-07-16 00:17:50.045932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.045960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 
00:34:15.711 [2024-07-16 00:17:50.046065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 
00:34:15.711 [2024-07-16 00:17:50.046773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.046907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.046987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.047016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.047102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.047130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 00:34:15.711 [2024-07-16 00:17:50.047230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.711 [2024-07-16 00:17:50.047259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.711 qpair failed and we were unable to recover it. 
00:34:15.711-00:34:15.714 [2024-07-16 00:17:50.047354 through 00:17:50.060962] the two messages above (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error) repeated continuously for tqpairs 0x7f48a0000b90, 0x7f4898000b90, 0x7f4890000b90, and 0x12a7990 with addr=10.0.0.2, port=4420; every attempt ended with "qpair failed and we were unable to recover it."
00:34:15.714 [2024-07-16 00:17:50.061046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.061187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.061330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.061465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.061597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 
00:34:15.714 [2024-07-16 00:17:50.061753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.061880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.061920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 
00:34:15.714 [2024-07-16 00:17:50.062408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.062939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.062969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 
00:34:15.714 [2024-07-16 00:17:50.063082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.063241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.063374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.063508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.063645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 
00:34:15.714 [2024-07-16 00:17:50.063789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.063921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.063947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.064032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.064058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.064145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.064173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.064263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.064291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 
00:34:15.714 [2024-07-16 00:17:50.064374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.064400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.064479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.714 [2024-07-16 00:17:50.064505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.714 qpair failed and we were unable to recover it. 00:34:15.714 [2024-07-16 00:17:50.064584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.064612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.064707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.064747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.064843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.064871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.064959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.064984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.065594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.065939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.065965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.066148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.066740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.066966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.066993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.067357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.067885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.067913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.068019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.068599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.068971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.068998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.069106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.069133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 
00:34:15.715 [2024-07-16 00:17:50.069238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.069265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.069346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.715 [2024-07-16 00:17:50.069373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.715 qpair failed and we were unable to recover it. 00:34:15.715 [2024-07-16 00:17:50.069457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.069482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.069568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.069594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.069681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.069709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.069801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.069841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.069938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.069966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.070545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.070911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.070939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.071126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.071695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.071904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.071930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.072271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.072778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.072902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.072929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 
00:34:15.716 [2024-07-16 00:17:50.073459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.716 [2024-07-16 00:17:50.073595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.716 qpair failed and we were unable to recover it. 00:34:15.716 [2024-07-16 00:17:50.073696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.073723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.073820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.073848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.073936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.073965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.074043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.074641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.074971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.074995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.075257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.075865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.075890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.075976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.076524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.076905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.076931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.077151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.077777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.077896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.077923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.078002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.078123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.078238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 
00:34:15.717 [2024-07-16 00:17:50.078342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.078448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.717 [2024-07-16 00:17:50.078552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.717 [2024-07-16 00:17:50.078577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.717 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.078678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.078705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.078786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.078813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.078899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.078925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.079523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.079951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.079977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.080124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b5320 is same with the state(5) to be set 00:34:15.718 [2024-07-16 00:17:50.080306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.080438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.080589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.080713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.080815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.080941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.080968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.081066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.081758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.081905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.081987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.718 [2024-07-16 00:17:50.082379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 00:34:15.718 [2024-07-16 00:17:50.082924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.718 [2024-07-16 00:17:50.082952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.718 qpair failed and we were unable to recover it. 
00:34:15.721 [2024-07-16 00:17:50.095914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.095940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 
00:34:15.721 [2024-07-16 00:17:50.096471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 00:34:15.721 [2024-07-16 00:17:50.096920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.721 [2024-07-16 00:17:50.096948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.721 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.097047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.097658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.097899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.097987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.098224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.098766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.098913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.098996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.099330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.099779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.099897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.099922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.100459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.100897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.100923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.101006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.101034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.101119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.101158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.101250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.101276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.101354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.101380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 00:34:15.722 [2024-07-16 00:17:50.101455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.722 [2024-07-16 00:17:50.101482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.722 qpair failed and we were unable to recover it. 
00:34:15.722 [2024-07-16 00:17:50.101566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.101591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.101670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.101698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.101786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.101815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.101902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.101930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.102147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.102771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.102905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.102989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.103336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.103799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.103912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.103938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.104482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.104932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.104959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.105053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.105650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.105898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.105996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.106036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.106128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.106163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 
00:34:15.723 [2024-07-16 00:17:50.106244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.723 [2024-07-16 00:17:50.106270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.723 qpair failed and we were unable to recover it. 00:34:15.723 [2024-07-16 00:17:50.106351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.106477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.106598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.106719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.106844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.106953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.106979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.107448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.107885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.107910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.107997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.108555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.108944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.108971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.109161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.109732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.109959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.109984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.110292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 
00:34:15.724 [2024-07-16 00:17:50.110861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.110887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.110982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.111009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.111095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.111123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.111222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.724 [2024-07-16 00:17:50.111253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.724 qpair failed and we were unable to recover it. 00:34:15.724 [2024-07-16 00:17:50.111339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.111495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.111601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.111710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.111822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.111939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.111970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.112056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.112648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.112881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.112910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.113246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.113813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.113925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.113951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.114027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.114054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.114149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.114175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 00:34:15.725 [2024-07-16 00:17:50.114254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.114280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.725 qpair failed and we were unable to recover it. 
00:34:15.725 [2024-07-16 00:17:50.114367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.725 [2024-07-16 00:17:50.114396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.114482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.114508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.114590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.114615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.114694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.114721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.114806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.114831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 
00:34:15.726 [2024-07-16 00:17:50.114913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.114938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.115023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.115050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.115127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.115161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.115243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.115269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 00:34:15.726 [2024-07-16 00:17:50.115347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.726 [2024-07-16 00:17:50.115372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.726 qpair failed and we were unable to recover it. 
00:34:15.726 [2024-07-16 00:17:50.115450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.115477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.115557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.115582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.115672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.115699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.115778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.115804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.115890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.115916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.115992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.116941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.116970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.117932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.117958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.118861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.118913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.726 [2024-07-16 00:17:50.119615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.726 [2024-07-16 00:17:50.119641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.726 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.119781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.119841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.119928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.119953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.120917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.120943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.121892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.121986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.122907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.122934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.123893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.123983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.124896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.727 [2024-07-16 00:17:50.124922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.727 qpair failed and we were unable to recover it.
00:34:15.727 [2024-07-16 00:17:50.125010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.125870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.125969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.126950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.126977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.127894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.127996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.128909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.128938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.129037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.129065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.129156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.728 [2024-07-16 00:17:50.129183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.728 qpair failed and we were unable to recover it.
00:34:15.728 [2024-07-16 00:17:50.129274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.129389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.129504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.129619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.129724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 
00:34:15.728 [2024-07-16 00:17:50.129843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.129959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.129988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.728 [2024-07-16 00:17:50.130068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.728 [2024-07-16 00:17:50.130094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.728 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.130416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.130871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.130974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.130999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.131530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.131882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.131911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.132145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.132702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.132924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.132951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.133281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.133814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.133920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.133945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.134030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.134057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.134147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.134175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.134265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.134293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 00:34:15.729 [2024-07-16 00:17:50.134387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.729 [2024-07-16 00:17:50.134413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.729 qpair failed and we were unable to recover it. 
00:34:15.729 [2024-07-16 00:17:50.134492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.134518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.134600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.134626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.134707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.134733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.134820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.134847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.134927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.134959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.135045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.135608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.135943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.135970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.136148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.136720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.136944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.136972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.137301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 00:34:15.730 [2024-07-16 00:17:50.137742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-16 00:17:50.137769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:15.730 qpair failed and we were unable to recover it. 
00:34:15.730 [2024-07-16 00:17:50.137852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.730 [2024-07-16 00:17:50.137879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:15.730 qpair failed and we were unable to recover it.
00:34:15.730 [2024-07-16 00:17:50.137970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.730 [2024-07-16 00:17:50.137999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:15.730 qpair failed and we were unable to recover it.
00:34:15.730 [2024-07-16 00:17:50.138092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.730 [2024-07-16 00:17:50.138121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:15.730 qpair failed and we were unable to recover it.
00:34:15.730 [2024-07-16 00:17:50.139245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:15.730 [2024-07-16 00:17:50.139274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:15.730 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure sequence repeats continuously for tqpairs 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90 (addr=10.0.0.2, port=4420, errno = 111) through 2024-07-16 00:17:50.150996 ...]
00:34:15.733 [2024-07-16 00:17:50.151111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 
00:34:15.733 [2024-07-16 00:17:50.151728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.151954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.151980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 
00:34:15.733 [2024-07-16 00:17:50.152297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 
00:34:15.733 [2024-07-16 00:17:50.152841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.152959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.152988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 
00:34:15.733 [2024-07-16 00:17:50.153420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.733 [2024-07-16 00:17:50.153836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 
00:34:15.733 [2024-07-16 00:17:50.153953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.733 [2024-07-16 00:17:50.153980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.733 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.154517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.154952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.154977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.155060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.155596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.155932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.155958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.156142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.156696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.156902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.156927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.157232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.157768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.157907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.157992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.734 [2024-07-16 00:17:50.158332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 00:34:15.734 [2024-07-16 00:17:50.158778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.734 [2024-07-16 00:17:50.158805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.734 qpair failed and we were unable to recover it. 
00:34:15.735 [2024-07-16 00:17:50.158889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.158915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.158992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 
00:34:15.735 [2024-07-16 00:17:50.159443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 00:34:15.735 [2024-07-16 00:17:50.159883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.735 [2024-07-16 00:17:50.159912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.735 qpair failed and we were unable to recover it. 
00:34:15.737 [... identical posix.c:1037:posix_sock_create connect() failed (errno = 111) and nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock failure records repeat for tqpairs 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90 (addr=10.0.0.2, port=4420) from 00:17:50.159994 through 00:17:50.172080; every qpair failed and could not be recovered ...]
00:34:15.737 [2024-07-16 00:17:50.172164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 
00:34:15.737 [2024-07-16 00:17:50.172748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.737 [2024-07-16 00:17:50.172967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.737 [2024-07-16 00:17:50.172995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.737 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.173329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.173775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.173902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.173929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.174469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.174889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.174918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.175005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.175547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.175915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.175941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.176149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.176813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.176926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.176953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.177393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.177836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 
00:34:15.738 [2024-07-16 00:17:50.177965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.738 [2024-07-16 00:17:50.177995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.738 qpair failed and we were unable to recover it. 00:34:15.738 [2024-07-16 00:17:50.178085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-16 00:17:50.178572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.178916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.178941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.179023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-16 00:17:50.179135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.179259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.179376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.179491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:15.739 [2024-07-16 00:17:50.179604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 
00:34:15.739 [2024-07-16 00:17:50.179717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.739 [2024-07-16 00:17:50.179754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:15.739 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.179859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.179887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.179980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 
00:34:16.019 [2024-07-16 00:17:50.180332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.180807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 
00:34:16.019 [2024-07-16 00:17:50.180925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.180952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.181046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.181072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.181162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.181189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.181277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.181303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 00:34:16.019 [2024-07-16 00:17:50.181388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.019 [2024-07-16 00:17:50.181414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.019 qpair failed and we were unable to recover it. 
00:34:16.020 [2024-07-16 00:17:50.184932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.184964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.185051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.185077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.185163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.185189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.185285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.185316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.185407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.185446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.186132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.186260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.186376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.186483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.020 [2024-07-16 00:17:50.186603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.020 qpair failed and we were unable to recover it.
00:34:16.020 [2024-07-16 00:17:50.186682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.186708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.186796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.186821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.186907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.186932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 
00:34:16.020 [2024-07-16 00:17:50.187224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.020 [2024-07-16 00:17:50.187657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 
00:34:16.020 [2024-07-16 00:17:50.187766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.020 [2024-07-16 00:17:50.187796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.020 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.187878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.187904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.187994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.188339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.188867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.188893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.188974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.189426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.189881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.189907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.189987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.190548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.190894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.190979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.191084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.191620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.191938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.191966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.192066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.192094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 
00:34:16.021 [2024-07-16 00:17:50.192183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.021 [2024-07-16 00:17:50.192209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.021 qpair failed and we were unable to recover it. 00:34:16.021 [2024-07-16 00:17:50.192286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.022 [2024-07-16 00:17:50.192312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.022 qpair failed and we were unable to recover it. 00:34:16.022 [2024-07-16 00:17:50.192401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.022 [2024-07-16 00:17:50.192427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.022 qpair failed and we were unable to recover it. 00:34:16.022 [2024-07-16 00:17:50.192507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.022 [2024-07-16 00:17:50.192533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.022 qpair failed and we were unable to recover it. 00:34:16.022 [2024-07-16 00:17:50.192613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.022 [2024-07-16 00:17:50.192639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.022 qpair failed and we were unable to recover it. 
00:34:16.022 [2024-07-16 00:17:50.192716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.192742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.192834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.192861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.192944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.192972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.193964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.193989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.194964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.194990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.195922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.195948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.022 [2024-07-16 00:17:50.196610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.022 qpair failed and we were unable to recover it.
00:34:16.022 [2024-07-16 00:17:50.196698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.196724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.196809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.196836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.196928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.196955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.197919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.197945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.198908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.198991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.199918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.199946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.200027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.200054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.200132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.200164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.200250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.200277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.200359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.023 [2024-07-16 00:17:50.200387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.023 qpair failed and we were unable to recover it.
00:34:16.023 [2024-07-16 00:17:50.200469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.200496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.200585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.200612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.200697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.200722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.200809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.200836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.200926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.200953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.201942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.201967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.202939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.202966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.203050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.203079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.203176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.203205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.203292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.024 [2024-07-16 00:17:50.203319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.024 qpair failed and we were unable to recover it.
00:34:16.024 [2024-07-16 00:17:50.203403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.203514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.203619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.203723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.203827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 
00:34:16.024 [2024-07-16 00:17:50.203947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.203973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 
00:34:16.024 [2024-07-16 00:17:50.204508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 00:34:16.024 [2024-07-16 00:17:50.204928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.024 [2024-07-16 00:17:50.204955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.024 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.205039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.205626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.205962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.205987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.206194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.206748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.206958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.206985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.207304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.207844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.207948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.207974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.208406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.208863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.208889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.208973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.209086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.209200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.209313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.025 [2024-07-16 00:17:50.209422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 
00:34:16.025 [2024-07-16 00:17:50.209553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.025 [2024-07-16 00:17:50.209579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.025 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.209657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.209683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.209771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.209797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.209883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.209910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.209987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.210096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.210642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.210971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.210998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.211188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.211733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.211962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.211988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.212289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.212849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.212958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.212985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.213400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.213845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 
00:34:16.026 [2024-07-16 00:17:50.213958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.213985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.026 [2024-07-16 00:17:50.214065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.026 [2024-07-16 00:17:50.214091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.026 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.214498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.214935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.214962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.215052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.215601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.215935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.215964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.216153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.216695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.216937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.216964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.217255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 
00:34:16.027 [2024-07-16 00:17:50.217802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.217906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.027 [2024-07-16 00:17:50.217939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.027 qpair failed and we were unable to recover it. 00:34:16.027 [2024-07-16 00:17:50.218016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.218376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.218808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.218910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.218936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.219475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.219895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.219921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.219998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.220550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.220896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.220922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.221148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.221730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.221901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.221978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.222003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.222080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.222106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.222208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.222235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 
00:34:16.028 [2024-07-16 00:17:50.222324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.222351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.028 qpair failed and we were unable to recover it. 00:34:16.028 [2024-07-16 00:17:50.222437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.028 [2024-07-16 00:17:50.222465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.222561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.222588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.222682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.222708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.222799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.222826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 
00:34:16.029 [2024-07-16 00:17:50.222910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.222938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 
00:34:16.029 [2024-07-16 00:17:50.226675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.226942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.226978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 
00:34:16.029 [2024-07-16 00:17:50.227310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 00:34:16.029 [2024-07-16 00:17:50.227802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.029 [2024-07-16 00:17:50.227829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.029 qpair failed and we were unable to recover it. 
00:34:16.029 [2024-07-16 00:17:50.227906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.227932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.228929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.228959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.029 qpair failed and we were unable to recover it.
00:34:16.029 [2024-07-16 00:17:50.229850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.029 [2024-07-16 00:17:50.229875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.229960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.229986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.230943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.230969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.231927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.231953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.232955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.232981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.233912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.233938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.234021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.234048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.234149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.234189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.234286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.030 [2024-07-16 00:17:50.234314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.030 qpair failed and we were unable to recover it.
00:34:16.030 [2024-07-16 00:17:50.234398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.234425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.234511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.234537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.234613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.234638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.234729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.234758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.234891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.234918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.234996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.235905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.235933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.236883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.236972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.237898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.237996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.031 [2024-07-16 00:17:50.238899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.031 [2024-07-16 00:17:50.238925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.031 qpair failed and we were unable to recover it.
00:34:16.032 [2024-07-16 00:17:50.239010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.239588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.239938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.239965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.240158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.240743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.240783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.240960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.241540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.241964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.241989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.242069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.032 [2024-07-16 00:17:50.242663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.242820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.242985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.243039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.243125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.243158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 00:34:16.032 [2024-07-16 00:17:50.243274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.032 [2024-07-16 00:17:50.243300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.032 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.243409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.243472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.243555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.243581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.243665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.243691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.243814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.243881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.244153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.244716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.244940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.244968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.245497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.245946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.245974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.246071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.246715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.246941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.246971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.247410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.247869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.247910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.248002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.248106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.248231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.248348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 00:34:16.033 [2024-07-16 00:17:50.248452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.033 qpair failed and we were unable to recover it. 
00:34:16.033 [2024-07-16 00:17:50.248567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.033 [2024-07-16 00:17:50.248594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.248690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.248718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.248826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.248855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.248948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.248976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 
00:34:16.034 [2024-07-16 00:17:50.249163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 
00:34:16.034 [2024-07-16 00:17:50.249710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.249926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.249954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.250040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.250067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 00:34:16.034 [2024-07-16 00:17:50.250174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.034 [2024-07-16 00:17:50.250204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.034 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.263904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.263935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.264542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.264919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.264999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.265120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.265701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.265930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.265959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.266378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.266815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.266935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.266962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 
00:34:16.037 [2024-07-16 00:17:50.267605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.037 [2024-07-16 00:17:50.267751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.037 qpair failed and we were unable to recover it. 00:34:16.037 [2024-07-16 00:17:50.267840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.267870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.267952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.267978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.268176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.268733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.268961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.268989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.269308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.269814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.269964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.270221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.270332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.270500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.270640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.270754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.270858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.270884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.270977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.271560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.271937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.271962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.272154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 
00:34:16.038 [2024-07-16 00:17:50.272804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.038 [2024-07-16 00:17:50.272918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.038 [2024-07-16 00:17:50.272947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.038 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.273363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.273813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.273922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.273948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.274485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.274948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.274976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.275057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.275598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.275951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.275977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.276170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.276720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.276836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.276863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.277061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.277089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.277171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.277198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.039 [2024-07-16 00:17:50.277279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.277304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 
00:34:16.039 [2024-07-16 00:17:50.277386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.039 [2024-07-16 00:17:50.277413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.039 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.277495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.277522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.277604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.277640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.277721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.277747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.277832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.277858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.277938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.277964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.278498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.278940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.278965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.279056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.279690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.279907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.279932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.280249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.280845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.280948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.280974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.281390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.040 [2024-07-16 00:17:50.281861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 
00:34:16.040 [2024-07-16 00:17:50.281972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.040 [2024-07-16 00:17:50.281999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.040 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 
00:34:16.041 [2024-07-16 00:17:50.282537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.282969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.282994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 
00:34:16.041 [2024-07-16 00:17:50.283076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 
00:34:16.041 [2024-07-16 00:17:50.283643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.283971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.283996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 00:34:16.041 [2024-07-16 00:17:50.284125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.041 [2024-07-16 00:17:50.284167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.041 qpair failed and we were unable to recover it. 
00:34:16.041 [2024-07-16 00:17:50.284256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.284908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.284934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.041 [2024-07-16 00:17:50.285830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.041 [2024-07-16 00:17:50.285856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.041 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.285950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.285976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.286891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.286977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.287886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.287915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.288897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.288925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.289897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.289923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.290008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.290035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.290122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.290157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.290243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.290270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.290355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.042 [2024-07-16 00:17:50.290381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.042 qpair failed and we were unable to recover it.
00:34:16.042 [2024-07-16 00:17:50.290465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.290492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.290580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.290609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.290689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.290716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.290803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.290829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.290920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.290947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.291923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.291949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.292939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.292967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.293921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.293947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.043 [2024-07-16 00:17:50.294900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.043 [2024-07-16 00:17:50.294926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.043 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.295895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.295920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.296892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.296918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.297005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.044 [2024-07-16 00:17:50.297030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.044 qpair failed and we were unable to recover it.
00:34:16.044 [2024-07-16 00:17:50.297113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 
00:34:16.044 [2024-07-16 00:17:50.297673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.297896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.297924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 
00:34:16.044 [2024-07-16 00:17:50.298242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 
00:34:16.044 [2024-07-16 00:17:50.298845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.298952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.298977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.299053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.299077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.299161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.299188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.044 [2024-07-16 00:17:50.299272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.299299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 
00:34:16.044 [2024-07-16 00:17:50.299385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.044 [2024-07-16 00:17:50.299411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.044 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.299492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.299518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.299606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.299631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.299716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.299741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.299822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.299846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.299922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.299946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.300476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.300945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.300973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.301074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.301642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.301894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.301983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.302228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 
00:34:16.045 [2024-07-16 00:17:50.302825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.302935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.302961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.303042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.045 [2024-07-16 00:17:50.303068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.045 qpair failed and we were unable to recover it. 00:34:16.045 [2024-07-16 00:17:50.303151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.303270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.046 [2024-07-16 00:17:50.303381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.303490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.303602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.303720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.303833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.046 [2024-07-16 00:17:50.303941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.303967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.046 [2024-07-16 00:17:50.304500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.304941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.304967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.046 [2024-07-16 00:17:50.305048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.046 [2024-07-16 00:17:50.305584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.305893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.305919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 00:34:16.046 [2024-07-16 00:17:50.306001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.046 [2024-07-16 00:17:50.306030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.046 qpair failed and we were unable to recover it. 
00:34:16.049 [2024-07-16 00:17:50.318450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.318476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.318564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.318592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.318712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.318739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.318819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.318847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.318926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.318951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 
00:34:16.049 [2024-07-16 00:17:50.319033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 
00:34:16.049 [2024-07-16 00:17:50.319578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.319912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.319939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.320015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.320041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 
00:34:16.049 [2024-07-16 00:17:50.320122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.320157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.049 qpair failed and we were unable to recover it. 00:34:16.049 [2024-07-16 00:17:50.320242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.049 [2024-07-16 00:17:50.320267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.320353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.320458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.320560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.320666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.320765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.320921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.320949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.321257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.321832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.321936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.321963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.322379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.322807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.322918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.322945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.323469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.323918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.323946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.324036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 
00:34:16.050 [2024-07-16 00:17:50.324592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.050 [2024-07-16 00:17:50.324720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.050 qpair failed and we were unable to recover it. 00:34:16.050 [2024-07-16 00:17:50.324795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.324820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.324903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.324931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 
00:34:16.051 [2024-07-16 00:17:50.325115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 
00:34:16.051 [2024-07-16 00:17:50.325728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.325955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.325981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 
00:34:16.051 [2024-07-16 00:17:50.326271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 00:34:16.051 [2024-07-16 00:17:50.326717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.051 [2024-07-16 00:17:50.326743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.051 qpair failed and we were unable to recover it. 
00:34:16.051 [2024-07-16 00:17:50.326828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.051 [2024-07-16 00:17:50.326854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.051 qpair failed and we were unable to recover it.
00:34:16.054 [the same three-line sequence — posix.c:1037:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 00:17:50.326828 through 00:17:50.339758, cycling over tqpairs 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90]
00:34:16.054 [2024-07-16 00:17:50.339839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.339866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.339943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.339969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 
00:34:16.054 [2024-07-16 00:17:50.340384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.340806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 
00:34:16.054 [2024-07-16 00:17:50.340912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.340937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.341058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.341183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.341292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.341401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 
00:34:16.054 [2024-07-16 00:17:50.341503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.054 qpair failed and we were unable to recover it. 00:34:16.054 [2024-07-16 00:17:50.341604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.054 [2024-07-16 00:17:50.341629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.341717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.341744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.341826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.341853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.341937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.341964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.342042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.342596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.342915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.342940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.343132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.343700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.343930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.343956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.344270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.344820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.344930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.344957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.345357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.345788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 
00:34:16.055 [2024-07-16 00:17:50.345905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.055 [2024-07-16 00:17:50.345931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.055 qpair failed and we were unable to recover it. 00:34:16.055 [2024-07-16 00:17:50.346011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 
00:34:16.056 [2024-07-16 00:17:50.346445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.346873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.346898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 
00:34:16.056 [2024-07-16 00:17:50.346980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 
00:34:16.056 [2024-07-16 00:17:50.347520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.347896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 00:34:16.056 [2024-07-16 00:17:50.347975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.348006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it. 
00:34:16.056 [2024-07-16 00:17:50.348086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.056 [2024-07-16 00:17:50.348112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.056 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed pair repeats continuously through 00:17:50.360868, cycling over tqpairs 0x7f4890000b90, 0x7f4898000b90 and 0x7f48a0000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:16.059 [2024-07-16 00:17:50.360949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.360975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 
00:34:16.059 [2024-07-16 00:17:50.361482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.361911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.361937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 
00:34:16.059 [2024-07-16 00:17:50.362014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 
00:34:16.059 [2024-07-16 00:17:50.362570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.059 [2024-07-16 00:17:50.362915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.059 [2024-07-16 00:17:50.362942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.059 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.363146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.363681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.363901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.363927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.364255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.364811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.364926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.364951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.365362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.365793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.365904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.365929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.366484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.366924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.366951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 
00:34:16.060 [2024-07-16 00:17:50.367033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.367059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.367149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.367178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.060 [2024-07-16 00:17:50.367259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.060 [2024-07-16 00:17:50.367285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.060 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.367378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.367487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 
00:34:16.061 [2024-07-16 00:17:50.367602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.367706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.367812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.367931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.367955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 
00:34:16.061 [2024-07-16 00:17:50.368145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 
00:34:16.061 [2024-07-16 00:17:50.368691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.368924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.368951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.369035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.369063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 00:34:16.061 [2024-07-16 00:17:50.369149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.061 [2024-07-16 00:17:50.369176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.061 qpair failed and we were unable to recover it. 
00:34:16.061 [2024-07-16 00:17:50.369258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.061 [2024-07-16 00:17:50.369283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.061 qpair failed and we were unable to recover it.
00:34:16.061 [2024-07-16 00:17:50.369366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.061 [2024-07-16 00:17:50.369392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.061 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 00:17:50.369470 through 00:17:50.381929, alternating across tqpair=0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90; every attempt to reach addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair is reported as unrecoverable ...]
00:34:16.064 [2024-07-16 00:17:50.382012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 
00:34:16.064 [2024-07-16 00:17:50.382578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.382928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.382954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 
00:34:16.064 [2024-07-16 00:17:50.383157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 
00:34:16.064 [2024-07-16 00:17:50.383701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.064 [2024-07-16 00:17:50.383840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.064 qpair failed and we were unable to recover it. 00:34:16.064 [2024-07-16 00:17:50.383921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.383947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.384263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.384809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.384913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.384938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.385361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.385798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.385905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.385930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.386451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.386886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.386912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.386992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.387553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.387902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.387928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.388016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.388045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 
00:34:16.065 [2024-07-16 00:17:50.388128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.388163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.388240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.388266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.388358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.065 [2024-07-16 00:17:50.388386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.065 qpair failed and we were unable to recover it. 00:34:16.065 [2024-07-16 00:17:50.388475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.388502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.388584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.388610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-16 00:17:50.388701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.388729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.388814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.388841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.388922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.388949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-16 00:17:50.389275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.389720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-16 00:17:50.389866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.389917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.066 [2024-07-16 00:17:50.390437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 00:34:16.066 [2024-07-16 00:17:50.390879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.066 [2024-07-16 00:17:50.390905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.066 qpair failed and we were unable to recover it. 
00:34:16.069 [2024-07-16 00:17:50.403864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.403889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.403979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.404095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.404228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.404373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 
00:34:16.069 [2024-07-16 00:17:50.404549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.404717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.404907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.404962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 
00:34:16.069 [2024-07-16 00:17:50.405273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.405876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.405902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 
00:34:16.069 [2024-07-16 00:17:50.406019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 
00:34:16.069 [2024-07-16 00:17:50.406624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.069 qpair failed and we were unable to recover it. 00:34:16.069 [2024-07-16 00:17:50.406952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.069 [2024-07-16 00:17:50.406980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.407095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.407311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.407414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.407540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.407658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.407786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.407905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.407945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.408501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.408971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.408996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.409081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.409663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.409894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.409922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.410247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.410802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.410914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.410941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 
00:34:16.070 [2024-07-16 00:17:50.411391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.070 [2024-07-16 00:17:50.411739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.070 qpair failed and we were unable to recover it. 00:34:16.070 [2024-07-16 00:17:50.411818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.411844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.411925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.411950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.412560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.412890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.412915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.413125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.413705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.413933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.413959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.414281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.414807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.414921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.414953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.071 [2024-07-16 00:17:50.415376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 00:34:16.071 [2024-07-16 00:17:50.415820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.071 [2024-07-16 00:17:50.415847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.071 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.415938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.415965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.416491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.416936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.416962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.417045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.417616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.417868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.417970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.418230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.418786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.418893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.418921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.419338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.419874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.419901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.419981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.420009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.420090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.420116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.420210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.420238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.072 [2024-07-16 00:17:50.420323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.420349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 
00:34:16.072 [2024-07-16 00:17:50.420445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.072 [2024-07-16 00:17:50.420472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.072 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.420562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.420589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.420676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.420704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.420785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.420811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.420892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.420918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.421011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.421599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.421927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.421952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.422170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.422721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.422959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.422984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.423278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.423834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.423942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.423974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.424379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 00:34:16.073 [2024-07-16 00:17:50.424886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.073 [2024-07-16 00:17:50.424914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.073 qpair failed and we were unable to recover it. 
00:34:16.073 [2024-07-16 00:17:50.425061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.073 [2024-07-16 00:17:50.425115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.073 qpair failed and we were unable to recover it.
00:34:16.073 [2024-07-16 00:17:50.425273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.073 [2024-07-16 00:17:50.425323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.425957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.425985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.426960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.426987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.427894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.427978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.428858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.428909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.429971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.429997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.074 qpair failed and we were unable to recover it.
00:34:16.074 [2024-07-16 00:17:50.430085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.074 [2024-07-16 00:17:50.430112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.430963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.430989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.431869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.431899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.432961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.432990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.433888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.433914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.075 [2024-07-16 00:17:50.434763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.075 qpair failed and we were unable to recover it.
00:34:16.075 [2024-07-16 00:17:50.434846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.434873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.434983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.435953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.435981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.436069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.436095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.436187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.436215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.436309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.076 [2024-07-16 00:17:50.436342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.076 qpair failed and we were unable to recover it.
00:34:16.076 [2024-07-16 00:17:50.436421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.436448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.436540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.436567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.436647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.436675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.436754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.436785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.436867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.436893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-16 00:17:50.437035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-16 00:17:50.437645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.437877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.437995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.438154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-16 00:17:50.438289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.438396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.438610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.438716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.438826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 
00:34:16.076 [2024-07-16 00:17:50.438938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.438964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.439079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.439105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.439199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.076 [2024-07-16 00:17:50.439225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.076 qpair failed and we were unable to recover it. 00:34:16.076 [2024-07-16 00:17:50.439307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.439449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.439562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.439670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.439773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.439895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.439921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.440159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.440809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.440914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.440941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.441361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.441816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.441924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.441949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.442486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.442947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.442974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.443061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 
00:34:16.077 [2024-07-16 00:17:50.443635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.443968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.077 [2024-07-16 00:17:50.443993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.077 qpair failed and we were unable to recover it. 00:34:16.077 [2024-07-16 00:17:50.444091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 
00:34:16.078 [2024-07-16 00:17:50.444205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 
00:34:16.078 [2024-07-16 00:17:50.444766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.444905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.444989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 
00:34:16.078 [2024-07-16 00:17:50.445332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.445769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 
00:34:16.078 [2024-07-16 00:17:50.445878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.445907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.446000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.446026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.446108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.446134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.446239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.446269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 00:34:16.078 [2024-07-16 00:17:50.446350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.078 [2024-07-16 00:17:50.446376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.078 qpair failed and we were unable to recover it. 
00:34:16.078 [2024-07-16 00:17:50.446463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.446491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.446589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.446616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.446691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.446716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.446809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.446835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.446909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.446935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.447895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.447951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.078 [2024-07-16 00:17:50.448695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.078 [2024-07-16 00:17:50.448721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.078 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.448804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.448829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.448920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.448948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.449972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.449998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.450913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.450943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.451932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.451959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.452960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.452988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.453079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.453106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.453198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.453225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.453310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.453337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.079 [2024-07-16 00:17:50.453427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.079 [2024-07-16 00:17:50.453456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.079 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.453538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.453565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.453648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.453676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.453756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.453781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.453861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.453887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.453983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.454890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.454918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.455895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.455921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.456937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.456965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.457049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.457075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.457162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.457190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.457273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.457299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.457379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.080 [2024-07-16 00:17:50.457405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.080 qpair failed and we were unable to recover it.
00:34:16.080 [2024-07-16 00:17:50.457488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.457515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.457596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.457621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.457700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.457725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.457806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.457832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.457917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.457944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.458925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.458954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.459067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.459185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.459302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.459425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.081 [2024-07-16 00:17:50.459536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.081 qpair failed and we were unable to recover it.
00:34:16.081 [2024-07-16 00:17:50.459639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.459664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.459739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.459765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.459859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.459888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.459986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 
00:34:16.081 [2024-07-16 00:17:50.460224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 
00:34:16.081 [2024-07-16 00:17:50.460876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.460902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.460982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 
00:34:16.081 [2024-07-16 00:17:50.461449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.461897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.461924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 
00:34:16.081 [2024-07-16 00:17:50.462025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.462050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.081 [2024-07-16 00:17:50.462146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.081 [2024-07-16 00:17:50.462176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.081 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.462621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.462884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.462979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.463199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.463873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.463902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.463992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.464416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.464879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.464905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.464985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.465583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.465911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.465939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.466150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 
00:34:16.082 [2024-07-16 00:17:50.466696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.082 [2024-07-16 00:17:50.466914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.082 [2024-07-16 00:17:50.466940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.082 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-16 00:17:50.467250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-16 00:17:50.467811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.467961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.467990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-16 00:17:50.468445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.468871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.468896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-16 00:17:50.468979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.469005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.469096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.469124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.469243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.469272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.469355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.469386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 00:34:16.083 [2024-07-16 00:17:50.469464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.083 [2024-07-16 00:17:50.469490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.083 qpair failed and we were unable to recover it. 
00:34:16.083 [2024-07-16 00:17:50.469566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.469592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.469674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.469700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.469787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.469815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.469910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.469935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.470967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.470993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.083 [2024-07-16 00:17:50.471688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.083 [2024-07-16 00:17:50.471715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.083 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.471803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.471830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.471915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.471942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.472957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.472983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.473960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.473986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.474958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.474987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.475888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.084 qpair failed and we were unable to recover it.
00:34:16.084 [2024-07-16 00:17:50.475976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.084 [2024-07-16 00:17:50.476004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.476964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.476989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.477918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.477999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.478895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.478924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.479868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.479900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.085 [2024-07-16 00:17:50.480729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.085 qpair failed and we were unable to recover it.
00:34:16.085 [2024-07-16 00:17:50.480817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.480846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.480962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.480988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.481891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.481919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.482925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.482951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.483967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.483993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.484912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.484939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.485078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.485204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.485310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.485421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.086 [2024-07-16 00:17:50.485527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.086 qpair failed and we were unable to recover it.
00:34:16.086 [2024-07-16 00:17:50.485641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.485702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.485793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.485820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.485912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.485938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.486922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.486999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.487963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.487990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.488972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.488998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.489787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.489816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.087 qpair failed and we were unable to recover it.
00:34:16.087 [2024-07-16 00:17:50.490771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.087 [2024-07-16 00:17:50.490796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.490888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.490914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.490993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.491915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.491941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.492056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.492083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.492173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.492200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.492306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.088 [2024-07-16 00:17:50.492332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.088 qpair failed and we were unable to recover it.
00:34:16.088 [2024-07-16 00:17:50.492414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.492441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.492558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.492584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.492676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.492703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.492789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.492816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.492903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.492931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 
00:34:16.088 [2024-07-16 00:17:50.493014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 
00:34:16.088 [2024-07-16 00:17:50.493603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.493966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.493994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 
00:34:16.088 [2024-07-16 00:17:50.494189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 
00:34:16.088 [2024-07-16 00:17:50.494722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.088 [2024-07-16 00:17:50.494863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.088 qpair failed and we were unable to recover it. 00:34:16.088 [2024-07-16 00:17:50.494954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.494982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.495343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.495783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.495888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.495917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.496486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.496967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.496995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.497075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.497681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.497897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.497922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.498273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.498817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.498918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.498943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 
00:34:16.089 [2024-07-16 00:17:50.499380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.089 [2024-07-16 00:17:50.499640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.089 qpair failed and we were unable to recover it. 00:34:16.089 [2024-07-16 00:17:50.499727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.499753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.499848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.499873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-16 00:17:50.499958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.499984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-16 00:17:50.500559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.500901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.500927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-16 00:17:50.501128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-16 00:17:50.501711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.501930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.501956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.502034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.502060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 00:34:16.090 [2024-07-16 00:17:50.502147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.502174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
00:34:16.090 [2024-07-16 00:17:50.502253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.090 [2024-07-16 00:17:50.502279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.090 qpair failed and we were unable to recover it. 
[The connect()/qpair error triple above repeats ~114 more times between 00:17:50.502 and 00:17:50.516, for tqpair values 0x7f4898000b90, 0x7f48a0000b90, 0x7f4890000b90, and 0x12a7990 — always posix.c:1037:posix_sock_create with errno = 111 followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock against addr=10.0.0.2, port=4420, ending "qpair failed and we were unable to recover it." Repeated lines elided for readability; no other messages appear in this span.]
00:34:16.361 [2024-07-16 00:17:50.516886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.516939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.517109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.517243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.517361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.517504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 
00:34:16.361 [2024-07-16 00:17:50.517679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.517871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.517898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 
00:34:16.361 [2024-07-16 00:17:50.518375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 00:34:16.361 [2024-07-16 00:17:50.518952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.361 [2024-07-16 00:17:50.518978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.361 qpair failed and we were unable to recover it. 
00:34:16.361 [2024-07-16 00:17:50.519121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.519256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.519434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.519568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.519678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.519795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.519905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.519935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.520562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.520943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.520970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.521169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.521746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.521894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.521980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.522322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.522812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.522916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.522941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.523611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.523909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.523936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.524023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.524049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.524131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.524169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 
00:34:16.362 [2024-07-16 00:17:50.524268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.362 [2024-07-16 00:17:50.524296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.362 qpair failed and we were unable to recover it. 00:34:16.362 [2024-07-16 00:17:50.524411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.524439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.524546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.524572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.524725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.524779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.524878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.524907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 
00:34:16.363 [2024-07-16 00:17:50.525043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 
00:34:16.363 [2024-07-16 00:17:50.525615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.525872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.525989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.526102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 
00:34:16.363 [2024-07-16 00:17:50.526221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.526327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.526437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.526544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 00:34:16.363 [2024-07-16 00:17:50.526654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.363 [2024-07-16 00:17:50.526680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.363 qpair failed and we were unable to recover it. 
00:34:16.363 [2024-07-16 00:17:50.526794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.526824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.526910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.526938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.527866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.527979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.528838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.528975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.529002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.529135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.529181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.529269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.529296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.529427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.363 [2024-07-16 00:17:50.529456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.363 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.529544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.529571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.529656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.529681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.529778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.529806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.529912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.529938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.530972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.530997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.531880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.531907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.532882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.532989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.533015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.533122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.533168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.533259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.533285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.533368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.533394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.364 [2024-07-16 00:17:50.533490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.364 [2024-07-16 00:17:50.533517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.364 qpair failed and we were unable to recover it.
00:34:16.363 [2024-07-16 00:17:50.533625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.533650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.533729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.533756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.533834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.533860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.533957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.533991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.534887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.534987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.535921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.535948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.536898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.536924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.537945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.537984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.538068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.538094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.538228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.538254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.538347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.365 [2024-07-16 00:17:50.538374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.365 qpair failed and we were unable to recover it.
00:34:16.365 [2024-07-16 00:17:50.538484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.538510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.538671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.538719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.538849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.538889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.538974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.539949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.539986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.540920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.540947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.541124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.541186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.541270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.541298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.541444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.366 [2024-07-16 00:17:50.541497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.366 qpair failed and we were unable to recover it.
00:34:16.366 [2024-07-16 00:17:50.541581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.541609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.541684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.541710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.541789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.541817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.541901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.541927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 
00:34:16.366 [2024-07-16 00:17:50.542184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 
00:34:16.366 [2024-07-16 00:17:50.542747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.542938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.542986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.543159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.543292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.543438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 
00:34:16.366 [2024-07-16 00:17:50.543637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.543788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.543925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.543951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.544067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.544093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.544226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.544264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 
00:34:16.366 [2024-07-16 00:17:50.544341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.544365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.366 [2024-07-16 00:17:50.544460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.366 [2024-07-16 00:17:50.544488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.366 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.544581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.544606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.544708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.544734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.544817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.544843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.544924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.544950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.545488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.545950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.545977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.546067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.546641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.546886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.546944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.547253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.547822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.547930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.547956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.548363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.548813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 
00:34:16.367 [2024-07-16 00:17:50.548950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.548979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.549069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.367 [2024-07-16 00:17:50.549094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.367 qpair failed and we were unable to recover it. 00:34:16.367 [2024-07-16 00:17:50.549195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.549310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.549430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 
00:34:16.368 [2024-07-16 00:17:50.549553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.549686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.549812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.549925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.549952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 
00:34:16.368 [2024-07-16 00:17:50.550150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 
00:34:16.368 [2024-07-16 00:17:50.550718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.550967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.550995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.551089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.551115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 00:34:16.368 [2024-07-16 00:17:50.551206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.368 [2024-07-16 00:17:50.551232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.368 qpair failed and we were unable to recover it. 
00:34:16.371 [2024-07-16 00:17:50.567018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 
00:34:16.371 [2024-07-16 00:17:50.567583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.567928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.567953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 
00:34:16.371 [2024-07-16 00:17:50.568154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 
00:34:16.371 [2024-07-16 00:17:50.568717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.568962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.568989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 
00:34:16.371 [2024-07-16 00:17:50.569442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.371 [2024-07-16 00:17:50.569806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.371 qpair failed and we were unable to recover it. 00:34:16.371 [2024-07-16 00:17:50.569916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.569954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.570050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.570645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.570877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.570905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.571264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.571779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.571899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.571924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.572475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.572927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.572953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.573041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.573675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.573939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.573979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.574335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 00:34:16.372 [2024-07-16 00:17:50.574886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.372 [2024-07-16 00:17:50.574948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.372 qpair failed and we were unable to recover it. 
00:34:16.372 [2024-07-16 00:17:50.575035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.575586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.575929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.575956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.576157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.576680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.576907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.576941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.577273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.577858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.577966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.577992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.578431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 00:34:16.373 [2024-07-16 00:17:50.578882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.373 [2024-07-16 00:17:50.578908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.373 qpair failed and we were unable to recover it. 
00:34:16.373 [2024-07-16 00:17:50.578987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.579537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.579883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.579910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.580123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.580816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.580936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.580962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.581403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.581906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.581934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.582016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.582588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.582897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.582922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.583104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 
00:34:16.374 [2024-07-16 00:17:50.583659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.374 [2024-07-16 00:17:50.583689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.374 qpair failed and we were unable to recover it. 00:34:16.374 [2024-07-16 00:17:50.583776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.583802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.583885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.583912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.584254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.584810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.584914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.584939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.585343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.585801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.585912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.585940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.586506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.586960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.586986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.587066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.587677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.587930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.587968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.588044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.588070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 00:34:16.375 [2024-07-16 00:17:50.588153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.375 [2024-07-16 00:17:50.588179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.375 qpair failed and we were unable to recover it. 
00:34:16.375 [2024-07-16 00:17:50.588259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.375 [2024-07-16 00:17:50.588285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.375 qpair failed and we were unable to recover it.
00:34:16.375 [2024-07-16 00:17:50.588366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.375 [2024-07-16 00:17:50.588391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.375 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.588472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.588497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.588581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.588605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.588686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.588711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.588789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.588815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.588894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.588920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.589906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.589930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.590915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.590943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.591889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.591915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.592895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.592921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.376 [2024-07-16 00:17:50.593002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.376 [2024-07-16 00:17:50.593026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.376 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.593900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.593926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.594970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.594999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.595902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.595986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.596903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.377 [2024-07-16 00:17:50.596929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.377 qpair failed and we were unable to recover it.
00:34:16.377 [2024-07-16 00:17:50.597011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.597924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.597951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.378 [2024-07-16 00:17:50.598862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.378 qpair failed and we were unable to recover it.
00:34:16.378 [2024-07-16 00:17:50.598951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.598978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 
00:34:16.378 [2024-07-16 00:17:50.599565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.599904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.599930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 
00:34:16.378 [2024-07-16 00:17:50.600114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 
00:34:16.378 [2024-07-16 00:17:50.600692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.600917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.600943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.601069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.601180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 
00:34:16.378 [2024-07-16 00:17:50.601289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.601404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.601516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.378 qpair failed and we were unable to recover it. 00:34:16.378 [2024-07-16 00:17:50.601622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.378 [2024-07-16 00:17:50.601648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.601730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.601756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.601842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.601868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.601959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.601988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.602415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.602842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.602967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.602994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.603565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.603919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.603996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.604110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.604674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.604905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.604931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.605254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 
00:34:16.379 [2024-07-16 00:17:50.605810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.605926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.605950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.606026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.606050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.606133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.606164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.379 qpair failed and we were unable to recover it. 00:34:16.379 [2024-07-16 00:17:50.606252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.379 [2024-07-16 00:17:50.606278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 
00:34:16.380 [2024-07-16 00:17:50.606356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.606382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.606471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.606496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.606579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.606605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.606718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.606742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.606830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.606858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 
00:34:16.380 [2024-07-16 00:17:50.606987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 
00:34:16.380 [2024-07-16 00:17:50.607655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.607906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.607932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 
00:34:16.380 [2024-07-16 00:17:50.608269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 00:34:16.380 [2024-07-16 00:17:50.608900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.380 [2024-07-16 00:17:50.608938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.380 qpair failed and we were unable to recover it. 
00:34:16.380 [2024-07-16 00:17:50.609048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.609868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.609986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.610951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.610978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.611086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.611267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.611391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.611500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.380 [2024-07-16 00:17:50.611617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.380 qpair failed and we were unable to recover it.
00:34:16.380 [2024-07-16 00:17:50.611694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.611719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.611796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.611821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.611902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.611928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.612947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.612971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.613910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.613939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.614891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.614945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.615966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.615990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.616068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.616093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.616206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.381 [2024-07-16 00:17:50.616247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.381 qpair failed and we were unable to recover it.
00:34:16.381 [2024-07-16 00:17:50.616335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.616362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.616449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.616475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.616584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.616639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.616763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.616817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.616898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.616922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.616999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.617917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.617942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.618905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.618931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.619936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.619961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.620041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.620066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.620152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.620178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.620288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.382 [2024-07-16 00:17:50.620354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.382 qpair failed and we were unable to recover it.
00:34:16.382 [2024-07-16 00:17:50.620439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.620465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.620593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.620637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.620739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.620804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.620896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.620934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.621912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.621938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.622020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.622045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.622126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.622160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.622240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.622265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.622374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.383 [2024-07-16 00:17:50.622433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.383 qpair failed and we were unable to recover it.
00:34:16.383 [2024-07-16 00:17:50.622520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.971 [2024-07-16 00:17:51.160836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.971 qpair failed and we were unable to recover it.
00:34:16.971 [2024-07-16 00:17:51.161133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.971 [2024-07-16 00:17:51.161211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.971 qpair failed and we were unable to recover it.
00:34:16.971 [2024-07-16 00:17:51.161493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.971 [2024-07-16 00:17:51.161534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.971 qpair failed and we were unable to recover it.
00:34:16.971 [2024-07-16 00:17:51.161747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.971 [2024-07-16 00:17:51.161775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.971 qpair failed and we were unable to recover it.
00:34:16.971 [2024-07-16 00:17:51.161988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.971 [2024-07-16 00:17:51.162019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.971 qpair failed and we were unable to recover it.
00:34:16.971 [2024-07-16 00:17:51.162345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.162386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.162633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.162681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.162906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.162949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.163093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.163150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.163338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.163399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 
00:34:16.971 [2024-07-16 00:17:51.163557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.163613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.163849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.163882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.164115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.164152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.164384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.164436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.164637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.164665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 
00:34:16.971 [2024-07-16 00:17:51.164860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.164911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.165072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.165279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.165479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.165587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 
00:34:16.971 [2024-07-16 00:17:51.165771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.971 qpair failed and we were unable to recover it. 00:34:16.971 [2024-07-16 00:17:51.165966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.971 [2024-07-16 00:17:51.165994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.166217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.166271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.166366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.166394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.166555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.166608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.166729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.166789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.167007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.167144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.167257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.167468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.167661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.167842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.167884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.168040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.168091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.168297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.168327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.168458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.168502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.168606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.168633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.168847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.168893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.168982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.169008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.169209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.169239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.169401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.169446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.169633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.169681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.169838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.169887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.170067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.170126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.170285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.170314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.170434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.170489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.170710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.170738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.170860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.170891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.171044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.171100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.171263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.171314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.171413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.171474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.171640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.171667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.171816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.171842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.171999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.172049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.172238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.172310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.172511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.172577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.172745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.172796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.172944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.173089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.173291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.173426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 
00:34:16.972 [2024-07-16 00:17:51.173573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.173716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.972 [2024-07-16 00:17:51.173746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.972 qpair failed and we were unable to recover it. 00:34:16.972 [2024-07-16 00:17:51.173891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.173941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.174064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.174120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.174295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.174345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 
00:34:16.973 [2024-07-16 00:17:51.174505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.174547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.174698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.174748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.174850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.174876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.175032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.175184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 
00:34:16.973 [2024-07-16 00:17:51.175351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.175542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.175744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.175962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.175989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.176103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.176160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 
00:34:16.973 [2024-07-16 00:17:51.176364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.176414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.176630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.176682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.176772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.176797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.176956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.177015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 00:34:16.973 [2024-07-16 00:17:51.177190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.973 [2024-07-16 00:17:51.177243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.973 qpair failed and we were unable to recover it. 
00:34:16.973 [2024-07-16 00:17:51.177290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b5320 (9): Bad file descriptor
00:34:16.973 [2024-07-16 00:17:51.177572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.973 [2024-07-16 00:17:51.177636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.973 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplets repeat continuously from 00:17:51.177800 through 00:17:51.196457 for tqpair=0x7f4890000b90, 0x12a7990, 0x7f4898000b90, and 0x7f48a0000b90 (all with addr=10.0.0.2, port=4420); duplicate entries omitted ...]
00:34:16.976 [2024-07-16 00:17:51.196538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.196563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.196647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.196673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.196762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.196789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.196875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.196903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.196994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 
00:34:16.976 [2024-07-16 00:17:51.197115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 
00:34:16.976 [2024-07-16 00:17:51.197716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.197949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.197975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 
00:34:16.976 [2024-07-16 00:17:51.198361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.198923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.198951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 
00:34:16.976 [2024-07-16 00:17:51.199102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.199167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.199332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.199381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.976 qpair failed and we were unable to recover it. 00:34:16.976 [2024-07-16 00:17:51.199510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.976 [2024-07-16 00:17:51.199572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.199701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.199765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.199867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.199928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.200013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.200127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.200282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.200504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.200650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.200762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.200890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.200919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.201353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.201844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.201963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.202109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.202227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.202367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.202505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.202654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.202901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.202928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.203506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.203906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.203989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.204156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.204374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.204531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.204733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.204890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.204916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.205001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.205029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 
00:34:16.977 [2024-07-16 00:17:51.205189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.205240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.205373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.205428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.205515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.205540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.977 qpair failed and we were unable to recover it. 00:34:16.977 [2024-07-16 00:17:51.205666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.977 [2024-07-16 00:17:51.205718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.205852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.205894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.205980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.206168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.206357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.206476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.206583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.206692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.206820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.206849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.207410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.207854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.207882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.208004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.208116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.208330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.208489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.208675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.208826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.208955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.208984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.209598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.209912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.209991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.210165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.210297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.210409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.210551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.210659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.210786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 
00:34:16.978 [2024-07-16 00:17:51.210910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.210940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.211029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.211057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.211158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.211186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.211340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.211395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.978 qpair failed and we were unable to recover it. 00:34:16.978 [2024-07-16 00:17:51.211475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.978 [2024-07-16 00:17:51.211501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.211581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.211606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.211684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.211709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.211823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.211851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.211968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.211998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.212215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.212359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.212470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.212632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.212783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.212889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.212916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.213083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.213258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.213412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.213553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.213692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.213862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.213890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.214004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.214221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.214391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.214584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.214790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.214938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.214966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.215105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.215153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.215303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.215356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.215508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.215560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.215702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.215754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.215841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.215868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.216481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.216804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.216961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.217125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 
00:34:16.979 [2024-07-16 00:17:51.217318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.217507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.217660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.979 qpair failed and we were unable to recover it. 00:34:16.979 [2024-07-16 00:17:51.217799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.979 [2024-07-16 00:17:51.217826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.217909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.217934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.218052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.218229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.218386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.218527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.218659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.218806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.218928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.218993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.219474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.219865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.219890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.220029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.220248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.220370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.220476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.220587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.220732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.220844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.220870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.221655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.221844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.221958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.222371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 00:34:16.980 [2024-07-16 00:17:51.222932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.980 [2024-07-16 00:17:51.222967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.980 qpair failed and we were unable to recover it. 
00:34:16.980 [2024-07-16 00:17:51.223089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.223238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.223368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.223535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.223680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.223809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.223955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.223980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.224442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.224889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.224914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.224998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.225576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.225908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.225935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.226155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.226740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.226963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.226993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.227293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 00:34:16.981 [2024-07-16 00:17:51.227787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.981 qpair failed and we were unable to recover it. 
00:34:16.981 [2024-07-16 00:17:51.227899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.981 [2024-07-16 00:17:51.227927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.228477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.228915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.228945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.229023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.229167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.229312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.229472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.229670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.229818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.229925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.229950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.230034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.230206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.230370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.230503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.230689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.230838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.230897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.231471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.231827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.231945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.232083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.232681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.232898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.232924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.233012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.233042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 00:34:16.982 [2024-07-16 00:17:51.233133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.982 [2024-07-16 00:17:51.233171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.982 qpair failed and we were unable to recover it. 
00:34:16.982 [2024-07-16 00:17:51.233264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.983 [2024-07-16 00:17:51.233294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.983 qpair failed and we were unable to recover it. 00:34:16.983 [2024-07-16 00:17:51.233381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.983 [2024-07-16 00:17:51.233406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.983 qpair failed and we were unable to recover it. 00:34:16.983 [2024-07-16 00:17:51.233487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.983 [2024-07-16 00:17:51.233514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.983 qpair failed and we were unable to recover it. 00:34:16.983 [2024-07-16 00:17:51.233597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.983 [2024-07-16 00:17:51.233622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.983 qpair failed and we were unable to recover it. 00:34:16.983 [2024-07-16 00:17:51.233706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.983 [2024-07-16 00:17:51.233731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.983 qpair failed and we were unable to recover it. 
00:34:16.983 [2024-07-16 00:17:51.233819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.983 [2024-07-16 00:17:51.233847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.983 qpair failed and we were unable to recover it.
00:34:16.983 [2024-07-16 00:17:51.234389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.983 [2024-07-16 00:17:51.234418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.983 qpair failed and we were unable to recover it.
00:34:16.983 [2024-07-16 00:17:51.234618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.983 [2024-07-16 00:17:51.234646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.983 qpair failed and we were unable to recover it.
00:34:16.983 [2024-07-16 00:17:51.234934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.983 [2024-07-16 00:17:51.234992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.983 qpair failed and we were unable to recover it.
00:34:16.986 [2024-07-16 00:17:51.251585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.251649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.251770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.251831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.251924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.251951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.252325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.252928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.252955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.253041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.253167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.253330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.253468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.253576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.253712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.253897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.253948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.254445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.254889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.254941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.255021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.255165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.255335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.255487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.255646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.255780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.255920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.255976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 
00:34:16.986 [2024-07-16 00:17:51.256536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.986 qpair failed and we were unable to recover it. 00:34:16.986 [2024-07-16 00:17:51.256816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.986 [2024-07-16 00:17:51.256843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.256957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.256985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.257075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.257198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.257322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.257435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.257600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.257724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.257903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.257955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.258038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.258174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.258312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.258498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.258672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.258886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.258944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.259054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.259121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.259245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.259310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.259445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.259501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.259612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.259678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.259801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.259861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.259970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.260118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.260232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.260383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.260589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.260761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.260909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.260935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.261134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.261755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.261868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.261898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.262004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.262063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.262158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.262188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 00:34:16.987 [2024-07-16 00:17:51.262271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.987 [2024-07-16 00:17:51.262296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.987 qpair failed and we were unable to recover it. 
00:34:16.987 [2024-07-16 00:17:51.262380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.987 [2024-07-16 00:17:51.262409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.987 qpair failed and we were unable to recover it.
00:34:16.987 [2024-07-16 00:17:51.262515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.987 [2024-07-16 00:17:51.262572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.987 qpair failed and we were unable to recover it.
00:34:16.987 [2024-07-16 00:17:51.262729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.987 [2024-07-16 00:17:51.262781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.987 qpair failed and we were unable to recover it.
00:34:16.987 [2024-07-16 00:17:51.262895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.262931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.263913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.263940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.264913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.264995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.265871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.265974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.266952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.266981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.267094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.267267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.267375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.267485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.988 [2024-07-16 00:17:51.267595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.988 qpair failed and we were unable to recover it.
00:34:16.988 [2024-07-16 00:17:51.267681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.267710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.267791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.267818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.267920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.267948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.268891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.268919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.269959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.269987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.270948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.270975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.271903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.271981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.272008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.272087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.272114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.272256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.272288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.272382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.989 [2024-07-16 00:17:51.272413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.989 qpair failed and we were unable to recover it.
00:34:16.989 [2024-07-16 00:17:51.272540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.272598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.272692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.272722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.272875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.272924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.273914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.273941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.274914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.274943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.275848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.275955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.276014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.276214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.276250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.276419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.276474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.276654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.276681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.276841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.276898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.277905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.277964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.278096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.278163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.278328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.278390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.278486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.990 [2024-07-16 00:17:51.278512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.990 qpair failed and we were unable to recover it.
00:34:16.990 [2024-07-16 00:17:51.278733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.990 [2024-07-16 00:17:51.278785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.990 qpair failed and we were unable to recover it. 00:34:16.990 [2024-07-16 00:17:51.278902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.990 [2024-07-16 00:17:51.278956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.990 qpair failed and we were unable to recover it. 00:34:16.990 [2024-07-16 00:17:51.279105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.990 [2024-07-16 00:17:51.279165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.990 qpair failed and we were unable to recover it. 00:34:16.990 [2024-07-16 00:17:51.279356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.990 [2024-07-16 00:17:51.279413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.990 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.279581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.279633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.279712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.279739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.279898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.279945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.280111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.280166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.280250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.280276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.280460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.280508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.280599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.280625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.280817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.280875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.281054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.281245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.281485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.281601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.281780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.281922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.281950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.282124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.282323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.282513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.282650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.282832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.282963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.282988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.283162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.283216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.283343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.283400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.283484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.283510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.283588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.283615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.283735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.283788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.283956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.284100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.284682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.284910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.284942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.285061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.285188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 
00:34:16.991 [2024-07-16 00:17:51.285314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.285527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.285691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.991 [2024-07-16 00:17:51.285843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.991 [2024-07-16 00:17:51.285915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.991 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.286021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.286240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.286401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.286530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.286740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.286872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.286896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.287036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.287250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.287393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.287523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.287669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.287821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.287952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.287978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.288109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.288264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.288416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.288607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.288781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.288930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.288954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.289296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.289834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.289859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.289976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.290028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.290161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.290213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.290329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.290386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.290497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.290563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 00:34:16.992 [2024-07-16 00:17:51.290668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.992 [2024-07-16 00:17:51.290736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.992 qpair failed and we were unable to recover it. 
00:34:16.992 [2024-07-16 00:17:51.290839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.290908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.290992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.291097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.291252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.291434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.291613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.992 qpair failed and we were unable to recover it.
00:34:16.992 [2024-07-16 00:17:51.291761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.992 [2024-07-16 00:17:51.291787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.291869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.291895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.291998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.292962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.292989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.293857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.293908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.294953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.294982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.295107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.295177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.295268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.295304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.295527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.295579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.295663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.295689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.295795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.295851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.296928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.296956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.297118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.297155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.297236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.993 [2024-07-16 00:17:51.297270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.993 qpair failed and we were unable to recover it.
00:34:16.993 [2024-07-16 00:17:51.297351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.297470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.297584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.297696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.297811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.297958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.297986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.298909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.298996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.299908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.299934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.300924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.300981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.301871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.301938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.994 [2024-07-16 00:17:51.302875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.994 qpair failed and we were unable to recover it.
00:34:16.994 [2024-07-16 00:17:51.302988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.303889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.303993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.304950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.304977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.305927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.305953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.306875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.306916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.307147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.307323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.307430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.307537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.995 [2024-07-16 00:17:51.307747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:16.995 qpair failed and we were unable to recover it.
00:34:16.995 [2024-07-16 00:17:51.307853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.307920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.308008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.308112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.308315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.308529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 
00:34:16.995 [2024-07-16 00:17:51.308667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.308814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.308872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.309028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.309077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.995 qpair failed and we were unable to recover it. 00:34:16.995 [2024-07-16 00:17:51.309159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.995 [2024-07-16 00:17:51.309193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.309358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.309411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.309547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.309601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.309717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.309775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.309855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.309881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.309988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.310238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.310864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.310893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.310979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.311089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.311292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.311492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.311625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.311732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.311859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.311925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.312348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.312905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.312931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.313007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.313692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.313942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.313969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.314050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.314161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 
00:34:16.996 [2024-07-16 00:17:51.314278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.314388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.314533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.996 [2024-07-16 00:17:51.314646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.996 [2024-07-16 00:17:51.314674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.996 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.314758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.314791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.314873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.314901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.314989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.315538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.315906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.315935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.316024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.316145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.316263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.316429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.316570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.316697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.316900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.316956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.317638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.317884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.317916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 
00:34:16.997 [2024-07-16 00:17:51.318300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.997 qpair failed and we were unable to recover it. 00:34:16.997 [2024-07-16 00:17:51.318924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.997 [2024-07-16 00:17:51.318977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.319183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.319212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.319294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.319322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.319458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.319514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.319596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.319624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.319777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.319828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.319941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.320109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.320269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.320380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.320546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.320725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.320881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.320937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.321375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.321880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.321910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.321994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.322676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.322861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.322985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.323362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.323833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.323860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 
00:34:16.998 [2024-07-16 00:17:51.323958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.324017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.324096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.324127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.324222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.324249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.324334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.324363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.998 qpair failed and we were unable to recover it. 00:34:16.998 [2024-07-16 00:17:51.324453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.998 [2024-07-16 00:17:51.324484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.324576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.324607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.324700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.324728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.324809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.324837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.324944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.324974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.325238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.325839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.325954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.325980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.326097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.326254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.326392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.326504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.326612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.326761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.326814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.327272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.327806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.327868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.327971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.328646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.328899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.328924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.329045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.329102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.329218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.329285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 
00:34:16.999 [2024-07-16 00:17:51.329415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.329471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:16.999 [2024-07-16 00:17:51.329587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.999 [2024-07-16 00:17:51.329653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:16.999 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.329742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.329768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.329869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.329927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 
00:34:17.000 [2024-07-16 00:17:51.330113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 
00:34:17.000 [2024-07-16 00:17:51.330778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.330894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.330923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 
00:34:17.000 [2024-07-16 00:17:51.331413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.331818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 00:34:17.000 [2024-07-16 00:17:51.331937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.000 [2024-07-16 00:17:51.332001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.000 qpair failed and we were unable to recover it. 
00:34:17.003 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pair repeats for tqpairs 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90 (addr=10.0.0.2, port=4420) through 2024-07-16 00:17:51.347946; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:17.003 [2024-07-16 00:17:51.348032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.348200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.348415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.348564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.348718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 
00:34:17.003 [2024-07-16 00:17:51.348828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.348943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.348970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 
00:34:17.003 [2024-07-16 00:17:51.349441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.349868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.349993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.350030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 
00:34:17.003 [2024-07-16 00:17:51.350150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.003 [2024-07-16 00:17:51.350178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.003 qpair failed and we were unable to recover it. 00:34:17.003 [2024-07-16 00:17:51.350268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.350307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.350396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.350424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.350508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.350536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.350664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.350719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.350821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.350878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.351638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.351876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.351997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.352368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.352856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.352885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.353040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.353315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.353514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.353680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.353832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.353939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.353967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.354561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.354937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.354967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.355174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 
00:34:17.004 [2024-07-16 00:17:51.355877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.355907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.355995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.004 [2024-07-16 00:17:51.356025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.004 qpair failed and we were unable to recover it. 00:34:17.004 [2024-07-16 00:17:51.356115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.356151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.356325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.356382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.356535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.356588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.356720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.356774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.356946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.357129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.357261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.357416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.357635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.357890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.357940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.358051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.358270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.358468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.358605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.358737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.358920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.358981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.359109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.359161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.359249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.359277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.359358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.359384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.359570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.359619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.359763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.359815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.359959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.360102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.360253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.360409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.360577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.360765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.360913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.360987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.361089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.361332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.361475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.361584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.361700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.361839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.361950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.361980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.362068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.362094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.362231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.362291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.362443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.362506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 
00:34:17.005 [2024-07-16 00:17:51.362585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.362610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.362778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.362831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.362947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.005 [2024-07-16 00:17:51.363001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.005 qpair failed and we were unable to recover it. 00:34:17.005 [2024-07-16 00:17:51.363105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.363178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.363342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.363381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.363460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.363486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.363660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.363714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.363822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.363888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.364274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.364776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.364895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.364936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.365027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.365169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.365322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.365450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.365673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.365817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.365865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.366457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.366907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.366990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.367170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.367319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.367459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.367615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.367756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 
00:34:17.006 [2024-07-16 00:17:51.367862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.367888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.367972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.368001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.006 [2024-07-16 00:17:51.368087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.006 [2024-07-16 00:17:51.368116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.006 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.368223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.368365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.368502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.368659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.368791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.368902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.368931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.369064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.369240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.369464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.369570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.369727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.369872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.369899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.370055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.370178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.370304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.370540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.370652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.370833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.370888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.370973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.371088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.371297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.371446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.371596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.371793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.371847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.371965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.372131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.372371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.372585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.372717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.372851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.372909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.372988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.373182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.373347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.373500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.373644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.373771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.373841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.374038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.374066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 
00:34:17.007 [2024-07-16 00:17:51.374148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.374174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.374293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.374327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.374411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.007 [2024-07-16 00:17:51.374437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.007 qpair failed and we were unable to recover it. 00:34:17.007 [2024-07-16 00:17:51.374526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.008 [2024-07-16 00:17:51.374566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.008 qpair failed and we were unable to recover it. 00:34:17.008 [2024-07-16 00:17:51.374654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.008 [2024-07-16 00:17:51.374680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.008 qpair failed and we were unable to recover it. 
00:34:17.010 [2024-07-16 00:17:51.390440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-16 00:17:51.390491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-16 00:17:51.390577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-16 00:17:51.390604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-16 00:17:51.390776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.010 [2024-07-16 00:17:51.390833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.010 qpair failed and we were unable to recover it. 00:34:17.010 [2024-07-16 00:17:51.391013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.391207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.391342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.391449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.391602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.391722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.391828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.391939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.391967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.392080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.392255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.392402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.392590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.392718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.392864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.392915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.393150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.393288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.393423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.393530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.393722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.393892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.393947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.394252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.394794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.394960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.395118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.395281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.395419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.395564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.395677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.395814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.395963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.395990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.396073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.396110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.396279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.396333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.396484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.396534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 
00:34:17.011 [2024-07-16 00:17:51.396689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.396744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.396894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.396947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.397025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.397056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.011 [2024-07-16 00:17:51.397147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.011 [2024-07-16 00:17:51.397176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.011 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.397264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.397412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.397536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.397643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.397749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.397861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.397889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.397976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.398082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.398235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.398364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.398541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.398726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.398891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.398921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.399452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.399959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.399988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.400077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.400114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.400213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.400241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.400364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.400424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.400586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.400636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.400801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.400858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.401013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.401069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.401272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.401332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.401470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.401535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.401675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.401727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.401807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.401832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.401986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012 [2024-07-16 00:17:51.402147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.402263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.402454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.402640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 00:34:17.012 [2024-07-16 00:17:51.402800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.012 [2024-07-16 00:17:51.402862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.012 qpair failed and we were unable to recover it. 
00:34:17.012-00:34:17.015 [2024-07-16 00:17:51.403046 - 00:17:51.419047] [... the same posix.c:1037:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it" triple repeats continuously for tqpairs 0x7f48a0000b90, 0x7f4898000b90, 0x7f4890000b90, and 0x12a7990, all targeting addr=10.0.0.2, port=4420 ...]
00:34:17.015 [2024-07-16 00:17:51.419131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 
00:34:17.015 [2024-07-16 00:17:51.419756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.419969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.419996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.420082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.420113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.420210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.420242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 
00:34:17.015 [2024-07-16 00:17:51.420326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.420355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.015 [2024-07-16 00:17:51.420552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.015 [2024-07-16 00:17:51.420579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.015 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.420772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.420798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.420884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.420916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.421158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.421729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.421862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.421926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.422046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.422213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.422374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.422539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.422727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.422875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.422904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.423309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.423853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.423881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.423971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.424731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.424949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.424978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.425332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.425802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 
00:34:17.016 [2024-07-16 00:17:51.425911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.425937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.426134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.426169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.016 qpair failed and we were unable to recover it. 00:34:17.016 [2024-07-16 00:17:51.426250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.016 [2024-07-16 00:17:51.426277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.426354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.426385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.426464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.426491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-16 00:17:51.426571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.426598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.426692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.426754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.426856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.426914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.426994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-16 00:17:51.427220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-16 00:17:51.427771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.427890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.427918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-16 00:17:51.428416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 00:34:17.017 [2024-07-16 00:17:51.428874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.017 [2024-07-16 00:17:51.428901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.017 qpair failed and we were unable to recover it. 
00:34:17.017 [2024-07-16 00:17:51.428987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.017 [2024-07-16 00:17:51.429017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.017 qpair failed and we were unable to recover it.
[... the same error triplet (connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 00:17:51.428 through 00:17:51.443 for tqpairs 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90, all targeting addr=10.0.0.2, port=4420; repeated records elided ...]
00:34:17.020 [2024-07-16 00:17:51.443953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.443981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 
00:34:17.020 [2024-07-16 00:17:51.444524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.444934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.444990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 
00:34:17.020 [2024-07-16 00:17:51.445246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 
00:34:17.020 [2024-07-16 00:17:51.445834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.020 [2024-07-16 00:17:51.445947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.020 [2024-07-16 00:17:51.445983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.020 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.446572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.446845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.446957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.447233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.447775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.447898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.447926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.448351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.448859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.448886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.448992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.449694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.449948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.449975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.450288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.450797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 
00:34:17.021 [2024-07-16 00:17:51.450911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.450941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.451032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.021 [2024-07-16 00:17:51.451069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.021 qpair failed and we were unable to recover it. 00:34:17.021 [2024-07-16 00:17:51.451170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.451287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.451396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.451514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.451631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.451741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.451896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.451949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.452050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.452204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.452345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.452481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.452605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.452747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.452857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.452896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.453570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.453943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.453970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.454052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.454174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.454330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.454486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.454653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 00:34:17.022 [2024-07-16 00:17:51.454833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.022 [2024-07-16 00:17:51.454883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.022 qpair failed and we were unable to recover it. 
00:34:17.022 [2024-07-16 00:17:51.454968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.454997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.455895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.455969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.456082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.456215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.456321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.456484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.022 [2024-07-16 00:17:51.456595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.022 qpair failed and we were unable to recover it.
00:34:17.022 [2024-07-16 00:17:51.456673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.456700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.456784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.456809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.456890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.456917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.457847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.457987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.458834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.458950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.459970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.459998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.460918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.460948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.023 [2024-07-16 00:17:51.461128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.023 [2024-07-16 00:17:51.461201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.023 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.461342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.461395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.461476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.461501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.461620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.461675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.461840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.461899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.462906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.462932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.463896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.463923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.464005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.464030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.464150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.464180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.024 [2024-07-16 00:17:51.464292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.024 [2024-07-16 00:17:51.464352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.024 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.464473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.464527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.464622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.464649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.464731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.464758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.464839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.464864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.464971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.465118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.465270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.465449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.465624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.465809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.465864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.466084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.466312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.466496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.466670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.466835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.466986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.467191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.467493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.467630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.467777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.467968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.467996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.468951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.468979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.469095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.299 [2024-07-16 00:17:51.469150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.299 qpair failed and we were unable to recover it.
00:34:17.299 [2024-07-16 00:17:51.469239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.469267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.469372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.469429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.469563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.469613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.469843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.469871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.470001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 
00:34:17.299 [2024-07-16 00:17:51.470147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.470282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.470454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.470622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.470808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.470849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 
00:34:17.299 [2024-07-16 00:17:51.470971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.299 [2024-07-16 00:17:51.471016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.299 qpair failed and we were unable to recover it. 00:34:17.299 [2024-07-16 00:17:51.471123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.471196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.471431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.471458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.471582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.471643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.471773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.471825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.471906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.471931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.472631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.472906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.472953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.473386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.473852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.473911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.473993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.474613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.474968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.474997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.475199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.475321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.475429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.475535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.475668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.475858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.475915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.476022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.476198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.476363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.476471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.476653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.476791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.476936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.476995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.477590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.477889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.477978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.478220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.478809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.478918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.478951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.479042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.479067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.479153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.479180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.300 [2024-07-16 00:17:51.479258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.479284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 
00:34:17.300 [2024-07-16 00:17:51.479366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.300 [2024-07-16 00:17:51.479401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.300 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.479491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.479519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.479644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.479696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.479804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.479874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.479967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.479996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 
00:34:17.301 [2024-07-16 00:17:51.480083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 
00:34:17.301 [2024-07-16 00:17:51.480724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.480965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.480991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.481110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.481168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 00:34:17.301 [2024-07-16 00:17:51.481285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.301 [2024-07-16 00:17:51.481340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.301 qpair failed and we were unable to recover it. 
00:34:17.301 [2024-07-16 00:17:51.481427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.481453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.481680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.481711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.481790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.481815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.481895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.481922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.482911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.482938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.483904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.483929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.484843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.484966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.485922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.485989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.486872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.486925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.487893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.487920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.488005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.301 [2024-07-16 00:17:51.488030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.301 qpair failed and we were unable to recover it.
00:34:17.301 [2024-07-16 00:17:51.488109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.488897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.488985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.489971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.489998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.490842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.490900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.491105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.491287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.491488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.491691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.491860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.491968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.492880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.492992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.493875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.493991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.494133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.494314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.494522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.494706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.494857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.494888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.495904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.302 [2024-07-16 00:17:51.495966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.302 qpair failed and we were unable to recover it.
00:34:17.302 [2024-07-16 00:17:51.496056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.496188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.496411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.496618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.496768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.496876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.496912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.497954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.497979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.498059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.498087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.498175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.498203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.498289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.303 [2024-07-16 00:17:51.498318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.303 qpair failed and we were unable to recover it.
00:34:17.303 [2024-07-16 00:17:51.498455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.498483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.498565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.498590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.498669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.498694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.498778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.498815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.498907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.498935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.499019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.499663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.499894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.499925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.500246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.500859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.500969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.500996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.501080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.501225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.501392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.501569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.501748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.501889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.501915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.502357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.502920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.502975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.503054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.503653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.503899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.503926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.504005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.504032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 00:34:17.303 [2024-07-16 00:17:51.504153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.303 [2024-07-16 00:17:51.504209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.303 qpair failed and we were unable to recover it. 
00:34:17.303 [2024-07-16 00:17:51.504299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.504324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.504428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.504488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.504600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.504661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.504825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.504852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.505196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.505818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.505921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.505949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.506351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.506812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.506916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.506944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.507494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.507875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.507918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.508127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.508729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.508887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.508940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.509054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.509239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.509416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.509607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.509781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.509921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.509945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.510277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.510931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.510957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.511057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.511242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.511392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.511506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.511630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.511767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.511822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.512024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.512056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.512146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.512174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.512254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.512282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 00:34:17.304 [2024-07-16 00:17:51.512390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.512448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.304 qpair failed and we were unable to recover it. 
00:34:17.304 [2024-07-16 00:17:51.512588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.304 [2024-07-16 00:17:51.512639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.512720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.512746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.512870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.512925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.513317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.513834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.513952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.513980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.514539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.514908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.514986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.515101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.515712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.515965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.515994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.516317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.516935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.516962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.517046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.517757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.517898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.517980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.518090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.518255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.518414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.518658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.518763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.518900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.518959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.519065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 
00:34:17.305 [2024-07-16 00:17:51.519222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.519325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.519483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.519616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.305 qpair failed and we were unable to recover it. 00:34:17.305 [2024-07-16 00:17:51.519722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.305 [2024-07-16 00:17:51.519749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 
00:34:17.306 [2024-07-16 00:17:51.519833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.519862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.519951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.519982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 
00:34:17.306 [2024-07-16 00:17:51.520432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 00:34:17.306 [2024-07-16 00:17:51.520893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.306 [2024-07-16 00:17:51.520920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.306 qpair failed and we were unable to recover it. 
00:34:17.306 [2024-07-16 00:17:51.521002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.521967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.521996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.522959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.522988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.523913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.523940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.524841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.524947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.525961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.525987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.526941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.526969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.527910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.306 [2024-07-16 00:17:51.527939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.306 qpair failed and we were unable to recover it.
00:34:17.306 [2024-07-16 00:17:51.528036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.528900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.528936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.529886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.529924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.530927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.530957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.531888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.531917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.532883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.532940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.533958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.533984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.534923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.534950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.535036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.535065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.535154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.535183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.535263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.535290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.307 [2024-07-16 00:17:51.535375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.307 [2024-07-16 00:17:51.535402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.307 qpair failed and we were unable to recover it.
00:34:17.308 [2024-07-16 00:17:51.535480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.308 [2024-07-16 00:17:51.535507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.308 qpair failed and we were unable to recover it.
00:34:17.308 [2024-07-16 00:17:51.535587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.308 [2024-07-16 00:17:51.535614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.308 qpair failed and we were unable to recover it.
00:34:17.308 [2024-07-16 00:17:51.535696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.308 [2024-07-16 00:17:51.535723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.308 qpair failed and we were unable to recover it.
00:34:17.308 [2024-07-16 00:17:51.535805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.308 [2024-07-16 00:17:51.535834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.308 qpair failed and we were unable to recover it.
00:34:17.308 [2024-07-16 00:17:51.535914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.535944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.536537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.536900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.536983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.537087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.537645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.537899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.537931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.538281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.538784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.538910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.538937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.539451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.539907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.539933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.540012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.540552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.540899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.540926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.541130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.541683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.541914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.541941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.542022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.542048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.542127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.542162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 
00:34:17.308 [2024-07-16 00:17:51.542248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.542277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.542362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.308 [2024-07-16 00:17:51.542388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.308 qpair failed and we were unable to recover it. 00:34:17.308 [2024-07-16 00:17:51.542470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.542504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.542592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.542620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.542702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.542731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.542818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.542848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.542936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.542965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.543383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.543904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.543931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.544036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.544629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.544889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.544973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.545084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.545200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.545309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.545421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.545577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.545726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.545876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.545903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.546635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.546887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.546947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.547289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.547839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.547950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.547977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.548518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.548901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.548931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.549203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 
00:34:17.309 [2024-07-16 00:17:51.549762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.549868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.549905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.550003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.550032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.550115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.309 [2024-07-16 00:17:51.550150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.309 qpair failed and we were unable to recover it. 00:34:17.309 [2024-07-16 00:17:51.550232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.550348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.550467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.550573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.550685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.550804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.550919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.550949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.551613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.551943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.551979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.552184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.552761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.552901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.552981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.553337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.553876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.553902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.553991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.554579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.554930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.554955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.555061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.555256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.555371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.555521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.555678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.555865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.555921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.556015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.556143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.556276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.556425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.556726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 
00:34:17.310 [2024-07-16 00:17:51.556856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.556884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.556977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.557004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.557098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.557125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.557217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.557248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.310 qpair failed and we were unable to recover it. 00:34:17.310 [2024-07-16 00:17:51.557342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.310 [2024-07-16 00:17:51.557372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 
00:34:17.311 [2024-07-16 00:17:51.557456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.311 [2024-07-16 00:17:51.557482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 00:34:17.311 [2024-07-16 00:17:51.557562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.311 [2024-07-16 00:17:51.557588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 00:34:17.311 [2024-07-16 00:17:51.557669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.311 [2024-07-16 00:17:51.557693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 00:34:17.311 [2024-07-16 00:17:51.557785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.311 [2024-07-16 00:17:51.557819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 00:34:17.311 [2024-07-16 00:17:51.557914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.311 [2024-07-16 00:17:51.557943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.311 qpair failed and we were unable to recover it. 
00:34:17.311 [2024-07-16 00:17:51.558043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.558960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.558987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.559187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.559348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.559513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.559688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.559855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.559957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.560938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.560965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.561938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.561966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.562895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.562989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.563876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.563937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.564862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.564890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.565900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.565926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.566042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.566102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.566214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.566282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.311 [2024-07-16 00:17:51.566371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.311 [2024-07-16 00:17:51.566397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.311 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.566546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.566599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.566764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.566802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.566906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.566932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.567830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.567888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.568864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.568922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.569089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.569279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.569494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.569668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.569885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.569991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.570772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.570972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.571908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.571996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.572026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.572208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.572246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.572338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.572365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.572456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.572483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.572594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.312 [2024-07-16 00:17:51.572645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.312 qpair failed and we were unable to recover it.
00:34:17.312 [2024-07-16 00:17:51.572836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.572863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.572952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.572980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.573111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.573294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.573523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 
00:34:17.312 [2024-07-16 00:17:51.573723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.573855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.573961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.573986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.574064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.574231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 
00:34:17.312 [2024-07-16 00:17:51.574428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.574623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.574752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.574862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.574889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.575012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 
00:34:17.312 [2024-07-16 00:17:51.575158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.575305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.575481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.575665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.575883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.575911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 
00:34:17.312 [2024-07-16 00:17:51.575995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.576036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.576211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.576266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.576429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.576466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.576590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.576650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.312 qpair failed and we were unable to recover it. 00:34:17.312 [2024-07-16 00:17:51.576741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.312 [2024-07-16 00:17:51.576767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.576855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.576881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.576978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.577288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.577419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.577529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.577668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.577810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.577938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.577963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.578302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.578873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.578899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.579003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.579153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.579299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.579425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.579693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.579801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.579826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.580505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.580974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.580999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.581102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.581683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.581915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.581940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.582355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.582873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.582900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.582987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 
00:34:17.313 [2024-07-16 00:17:51.583543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.313 [2024-07-16 00:17:51.583903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.313 qpair failed and we were unable to recover it. 00:34:17.313 [2024-07-16 00:17:51.583981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 
00:34:17.314 [2024-07-16 00:17:51.584083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 00:34:17.314 [2024-07-16 00:17:51.584216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 00:34:17.314 [2024-07-16 00:17:51.584322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 00:34:17.314 [2024-07-16 00:17:51.584439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 00:34:17.314 [2024-07-16 00:17:51.584549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.314 [2024-07-16 00:17:51.584578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.314 qpair failed and we were unable to recover it. 
00:34:17.314 [2024-07-16 00:17:51.584668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.314 [2024-07-16 00:17:51.584698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.314 qpair failed and we were unable to recover it.
00:34:17.315 [message repeated through 2024-07-16 00:17:51.599069: same connect() failed, errno = 111 / qpair failed sequence against addr=10.0.0.2, port=4420 for tqpairs 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90]
00:34:17.315 [2024-07-16 00:17:51.599160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.315 [2024-07-16 00:17:51.599189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.315 qpair failed and we were unable to recover it. 00:34:17.315 [2024-07-16 00:17:51.599273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.315 [2024-07-16 00:17:51.599299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.599385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.599416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.599533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.599561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.599649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.599680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.599767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.599798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.599884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.599912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.600446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.600926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.600983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.601079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.601115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.601212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.601254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.601416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.601466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.601624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.601690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.601787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.601815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.601997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.602165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.602343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.602483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.602616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.602793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.602930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.602959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.603614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.603934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.603960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.604266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 
00:34:17.316 [2024-07-16 00:17:51.604844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.604951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.604977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.605056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.316 [2024-07-16 00:17:51.605083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.316 qpair failed and we were unable to recover it. 00:34:17.316 [2024-07-16 00:17:51.605166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.605275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.605382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.605490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.605607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.605711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.605821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.605925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.605952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.606514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.606872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.606901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.607115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.607693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.607946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.607975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.608333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 00:34:17.317 [2024-07-16 00:17:51.608922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.317 [2024-07-16 00:17:51.608951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.317 qpair failed and we were unable to recover it. 
00:34:17.317 [2024-07-16 00:17:51.609039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.317 [2024-07-16 00:17:51.609071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.317 qpair failed and we were unable to recover it.
00:34:17.319 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock-connection-error / "qpair failed and we were unable to recover it." sequence repeats from 00:17:51.609162 through 00:17:51.622503 for tqpairs 0x7f4898000b90, 0x7f48a0000b90, 0x7f4890000b90, and 0x12a7990, all targeting addr=10.0.0.2, port=4420 ...]
00:34:17.319 [2024-07-16 00:17:51.622627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.622679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.622767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.622794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.622876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.622904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.622993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.623135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.623319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.623426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.623533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.623677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.623956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.623989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.624122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.624347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.624494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.624633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.624744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.624860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.624967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.624996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.625552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.625966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.625993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.626083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.626204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.626343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.626489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.626697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.319 [2024-07-16 00:17:51.626848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.626889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 
00:34:17.319 [2024-07-16 00:17:51.626977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.319 [2024-07-16 00:17:51.627006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.319 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.627626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.627872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.627992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.628413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.628940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.628968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.629049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.629644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.629879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.629906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.630357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.630889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.630916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.631004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.631688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.631911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.631993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.632185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.632354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.632528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.632674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.632791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.632926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.632981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 00:34:17.320 [2024-07-16 00:17:51.633061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.633086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
00:34:17.320 [2024-07-16 00:17:51.633175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.320 [2024-07-16 00:17:51.633203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.320 qpair failed and we were unable to recover it. 
[... the identical three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 00:17:51.633285 through 00:17:51.648406, cycling over tqpair values 0x12a7990, 0x7f4890000b90, 0x7f4898000b90, and 0x7f48a0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:17.322 [2024-07-16 00:17:51.648488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.648516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.648683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.648739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.648842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.648904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.649373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.649941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.649970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.650130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.650271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.650378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.650518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.650636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.650763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.650893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.650922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.651486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.651924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.651953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.652077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.652250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.652475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.652617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.652756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.652939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.652995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.322 [2024-07-16 00:17:51.653081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.653111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.653228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.653291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.653375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.653402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.653488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.653516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 00:34:17.322 [2024-07-16 00:17:51.653621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.322 [2024-07-16 00:17:51.653687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.322 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.653799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.653862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.653948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.653976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.654497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.654964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.654991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.655313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.655883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.655911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.655995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.656153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.656293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.656437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.656615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.656828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.656855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.656978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.657178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.657314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.657476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.657599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.657735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.657885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.657945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.658049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.658233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.658408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.658566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.658711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.658853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.658902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.658984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.659010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.659099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.659126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.659229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.659256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.659333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.659361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 00:34:17.323 [2024-07-16 00:17:51.659440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.323 [2024-07-16 00:17:51.659473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.323 qpair failed and we were unable to recover it. 
00:34:17.323 [2024-07-16 00:17:51.659554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.659581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.659661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.659689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.659773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.659801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.659920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.659990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.660893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.660922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.661925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.661985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.662824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.662879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.663907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.663987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.664014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.664097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.323 [2024-07-16 00:17:51.664125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.323 qpair failed and we were unable to recover it.
00:34:17.323 [2024-07-16 00:17:51.664247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.664395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.664540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.664669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.664809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.664921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.664948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.665842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.665883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.666887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.666980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.667933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.667962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.668926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.668967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.669858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.669884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.670887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.670916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.671899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.671984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.672943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.672987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.324 [2024-07-16 00:17:51.673763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.324 qpair failed and we were unable to recover it.
00:34:17.324 [2024-07-16 00:17:51.673873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.673936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.674905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.674933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.675049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.675250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.675361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.675512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.325 [2024-07-16 00:17:51.675630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.325 qpair failed and we were unable to recover it.
00:34:17.325 [2024-07-16 00:17:51.675714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.675742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.675840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.675882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.675985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.676099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.676258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.676377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.676520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.676710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.676884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.676937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.677051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.677193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.677379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.677545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.677716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.677943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.677999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.678188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.678215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.678423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.678451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.678642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.678690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.678859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.678922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.679071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.679221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.679428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.679534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.679653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.679761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.679892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.679946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.680647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.680901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.680979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.681089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.681233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.681370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.681490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.681603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.681773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.681910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.681937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.682650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.682939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.682968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.683288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.683785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.683901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.683929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 
00:34:17.325 [2024-07-16 00:17:51.684574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.325 [2024-07-16 00:17:51.684786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.325 qpair failed and we were unable to recover it. 00:34:17.325 [2024-07-16 00:17:51.684870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.684896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.685017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.685070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.685151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.685178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-16 00:17:51.685292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.685377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.685614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.685646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.685728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.685755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.685993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.686136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-16 00:17:51.686327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.686517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.686721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.686886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.686939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.687046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.687101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [2024-07-16 00:17:51.687315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.687345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.687511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.687566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.687746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.687801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.687913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.687965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 00:34:17.326 [2024-07-16 00:17:51.688066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.326 [2024-07-16 00:17:51.688131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.326 qpair failed and we were unable to recover it. 
00:34:17.326 [... the same three-line record (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats from 00:17:51.688223 through 00:17:51.702840 for tqpair handles 0x7f48a0000b90, 0x7f4898000b90, 0x7f4890000b90, and 0x12a7990 ...]
00:34:17.327 [2024-07-16 00:17:51.702925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-16 00:17:51.702952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-16 00:17:51.703037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-16 00:17:51.703066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-16 00:17:51.703169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.327 [2024-07-16 00:17:51.703197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.327 qpair failed and we were unable to recover it. 00:34:17.327 [2024-07-16 00:17:51.703282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.703398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.703525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.703634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.703742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.703883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.703944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.704069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.704254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.704394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.704522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.704665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.704822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.704885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.705021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.705166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.705338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.705512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.705702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.705868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.705923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.706036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.706092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.706276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.706306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.706395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.706424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.706528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.706585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.706764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.706816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.706978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.707160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.707305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.707417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.707568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.707760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.707891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.707918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.708103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.708283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.708394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.708572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.708723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.708876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.708936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.709069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.709221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.709376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.709511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.709628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.709773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.709934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.709962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.710089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.710231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.710376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.710512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.710621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.710838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.710896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.711023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.711080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.711289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.711345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.711524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.711554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.711675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.711737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.711855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.711914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.712044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.712096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.712255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.712283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.712391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.712445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.712629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.712686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.712815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.712868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.712994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.713135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.713283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 
00:34:17.328 [2024-07-16 00:17:51.713431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.713586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.713759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.713940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.328 [2024-07-16 00:17:51.713989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.328 qpair failed and we were unable to recover it. 00:34:17.328 [2024-07-16 00:17:51.714120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.714169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.714339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.714390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.714512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.714559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.714693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.714748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.714882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.714934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.715013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.715186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.715371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.715560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.715693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.715903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.715931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.716014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.716666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.716867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.716993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.717479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.717955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.717982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.718081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.718245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.718386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.718492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.718598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.718754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.718946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.718988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.719536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.719847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.719961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.720158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.720307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.720462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.720578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.720703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.720820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.720941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.720971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.721579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.721870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.721975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.722122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.722254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.722378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.722480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.722592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.722707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 
00:34:17.329 [2024-07-16 00:17:51.722869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.722925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.723029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.329 [2024-07-16 00:17:51.723090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.329 qpair failed and we were unable to recover it. 00:34:17.329 [2024-07-16 00:17:51.723182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.723289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.723393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.723496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.723613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.723773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.723925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.723987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.724201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.724754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.724873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.724902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.725349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.725810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.725931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.725959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.726616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.726923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.726950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.727348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.727803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.727912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.727941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.728613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.728954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.728987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 00:34:17.330 [2024-07-16 00:17:51.729068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.330 [2024-07-16 00:17:51.729096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.330 qpair failed and we were unable to recover it. 
00:34:17.330 [2024-07-16 00:17:51.729226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.729347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.729467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.729579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.729694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.729848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.729895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.730965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.730993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.330 qpair failed and we were unable to recover it.
00:34:17.330 [2024-07-16 00:17:51.731916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.330 [2024-07-16 00:17:51.731943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.732888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.732915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.733926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.733958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.734905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.734932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.735945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.735972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.736938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.736967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.737912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.737939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.738901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.738984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.739932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.739959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.740840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.740957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.741014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.741129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.331 [2024-07-16 00:17:51.741189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.331 qpair failed and we were unable to recover it.
00:34:17.331 [2024-07-16 00:17:51.741348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.741402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.741564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.741591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.741734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.741762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.741929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.741981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.742102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.742379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.742576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.742747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.742860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.742989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.743900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.743988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.744937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.744996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.745905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.745933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.746898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.746928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.747912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.747939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.748922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.748950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.749899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.749983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.750012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.750099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.750128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.750226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.750252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.750330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.332 [2024-07-16 00:17:51.750357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.332 qpair failed and we were unable to recover it.
00:34:17.332 [2024-07-16 00:17:51.750435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.750461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.750563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.750620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.750709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.750744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.750824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.750852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.750938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.750965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.751965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.751992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.752899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.752925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.753908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.753936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.754111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.754292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.754401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.754505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.333 [2024-07-16 00:17:51.754608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.333 qpair failed and we were unable to recover it.
00:34:17.333 [2024-07-16 00:17:51.754689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.754716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.754805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.754835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.754918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.754948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.755326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.755919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.755977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.756062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.756205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.756346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.756484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.756665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.756803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.756911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.756938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.757406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.757898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.757924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.758040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.758170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.758299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.758461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.758608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.758721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.758833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.758863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.759025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.759053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.759131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.759165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.759280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.759331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 
00:34:17.333 [2024-07-16 00:17:51.759441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.759503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.333 [2024-07-16 00:17:51.759581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.333 [2024-07-16 00:17:51.759608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.333 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.759734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.759779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.759886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.759946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.760058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.760236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.760378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.760531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.760679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.760790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.760904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.760932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.761647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.761930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.761985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.762340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.762828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.762858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.762977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.763629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.763948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.763976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.764087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.764214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.764329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.764478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.764649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.764795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.764935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.764963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.765503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.765920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.765946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.766026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.766142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.766270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.766411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.766599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.766800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.766910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.766937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.767480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.767863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.767893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.768050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.768100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.334 [2024-07-16 00:17:51.768193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.768220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 
00:34:17.334 [2024-07-16 00:17:51.768300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.334 [2024-07-16 00:17:51.768327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.334 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.768415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.768441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.768526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.768553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.768637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.768664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.768751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.768778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.768889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.768930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.769018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.769129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.769329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.769479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.769738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.769893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.769922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.770380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.770869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.770898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.771015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.771133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.771291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.771424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.771577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.771705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.771840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.771887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.772472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.772942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.772969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.773056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.773249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.773405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.773537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.773667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.773873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.773924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.774021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.774135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.774314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.774512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.774719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.774895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.774945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.775031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.775059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.775344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.775387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.775548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.775600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.775689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.775717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.775849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.775904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.776622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.776890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.776945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.777256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.777805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.777923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.777953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.778038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.778064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.778197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.778257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.778346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.778374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 
00:34:17.335 [2024-07-16 00:17:51.778457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.778485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.778571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.335 [2024-07-16 00:17:51.778598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.335 qpair failed and we were unable to recover it. 00:34:17.335 [2024-07-16 00:17:51.778685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.778715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.778802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.778830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.778907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.778933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.779015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.779686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.779917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.779945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.780252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.780848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.780874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.780986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.781468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.781872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.781901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.782163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.782770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.782930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.782992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.783517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.783897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.783958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.784167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.784842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.784950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.784977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.785538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.785833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.785958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.786082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.786256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.786396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.786539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.786664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.786845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.786896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.787061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.787113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.787276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.787343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.787516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.787553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.787707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.787757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.787918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.787972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 
00:34:17.336 [2024-07-16 00:17:51.788049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.788076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.788161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.788189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.336 [2024-07-16 00:17:51.788298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.336 [2024-07-16 00:17:51.788352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.336 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.788428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.788455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.788533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.788560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.788712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.788762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.788838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.788865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.788953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.788983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.789070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.789231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.789437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.789642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.789806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.789946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.789974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.790097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.790271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.790407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.790587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.790722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.790828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.790855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.790963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.791124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.791291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.791406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.791596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.791738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.791863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.791908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.792438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.792877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.792905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.793012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.793074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.793251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.793279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.793459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.793514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.793671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.793717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.793851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.793897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.794026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.337 [2024-07-16 00:17:51.794164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.794319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.794455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.794612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 00:34:17.337 [2024-07-16 00:17:51.794763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.337 [2024-07-16 00:17:51.794789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.337 qpair failed and we were unable to recover it. 
00:34:17.611 [2024-07-16 00:17:51.794867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.794894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.794986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.795896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.795947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.796914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.796941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.797063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.797114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.797294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.797321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.797488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.797515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.797633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.797688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.797846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.797900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.798072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.798381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.798605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.798715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.798827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.798942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.799870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.799896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.800891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.800942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.801951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.801978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.802967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.611 [2024-07-16 00:17:51.802994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.611 qpair failed and we were unable to recover it.
00:34:17.611 [2024-07-16 00:17:51.803103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.803932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.803972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.804857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.804916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.805971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.805998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.806917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.806973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.807136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.807266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.807482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.807736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.807843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.807959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.808094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.808303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.808439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.808651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.808837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.808885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.809074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.809126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.809243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.809270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.809415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.809468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.809645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.612 [2024-07-16 00:17:51.809695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.612 qpair failed and we were unable to recover it.
00:34:17.612 [2024-07-16 00:17:51.809845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.809897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.810007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.810061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.810274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.810323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.810408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.810434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.810646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.810695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 
00:34:17.612 [2024-07-16 00:17:51.810865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.810921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.811064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.811285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.811400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.811565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 
00:34:17.612 [2024-07-16 00:17:51.811773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.811920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.811947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.812059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.812085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.812172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.812199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.812366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.812393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 
00:34:17.612 [2024-07-16 00:17:51.812472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.612 [2024-07-16 00:17:51.812499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.612 qpair failed and we were unable to recover it. 00:34:17.612 [2024-07-16 00:17:51.812611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.812665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.812813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.812863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.812943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.812973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.813086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.813205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.813365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.813520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.813667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.813776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.813804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.813977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.814228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.814341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.814461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.814578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.814759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.814935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.814993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.815535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.815900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.815928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.816027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.816084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.816222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.816278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.816442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.816492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.816568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1383523 Killed "${NVMF_APP[@]}" "$@" 00:34:17.613 [2024-07-16 00:17:51.816595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.816730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.816783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.816914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.816962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:17.613 [2024-07-16 00:17:51.817074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.817214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:17.613 [2024-07-16 00:17:51.817392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:17.613 [2024-07-16 00:17:51.817543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.817704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.817912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.817960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.613 [2024-07-16 00:17:51.818043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.818389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.818923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.818950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.819052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.819666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.819887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.819913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.820029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.820055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.820165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.820193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 
00:34:17.613 [2024-07-16 00:17:51.820296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.820327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.820454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.820486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.820592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.613 [2024-07-16 00:17:51.820619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.613 qpair failed and we were unable to recover it. 00:34:17.613 [2024-07-16 00:17:51.820772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.820824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.820910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.820936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 
00:34:17.614 [2024-07-16 00:17:51.821068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 
00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1383935 00:34:17.614 [2024-07-16 00:17:51.821681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1383935 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.821932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.821965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 
00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1383935 ']' 00:34:17.614 [2024-07-16 00:17:51.822065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.822184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.822307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:17.614 [2024-07-16 00:17:51.822453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.614 [2024-07-16 00:17:51.822608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.822757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 00:17:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.614 [2024-07-16 00:17:51.822887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.822914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 00:34:17.614 [2024-07-16 00:17:51.823015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.614 [2024-07-16 00:17:51.823045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.614 qpair failed and we were unable to recover it. 
00:34:17.614 [2024-07-16 00:17:51.823156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.823349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.823499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.823645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.823800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.823967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.823996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.824894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.824923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.825942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.825972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.614 [2024-07-16 00:17:51.826946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.614 [2024-07-16 00:17:51.826972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.614 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.827848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.827891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.828894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.828982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.829901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.829987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.830929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.830972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.831877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.831906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.832878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.832977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.833886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.833974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.615 [2024-07-16 00:17:51.834712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.615 [2024-07-16 00:17:51.834739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.615 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.834842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.834874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.834968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.834997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.835969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.835998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.836904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.836994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.837887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.837978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.838917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.838945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.839884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.839912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.840954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.840981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.616 [2024-07-16 00:17:51.841794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.616 [2024-07-16 00:17:51.841820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.616 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.841909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.841937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.842961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.842990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.843898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.843924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.844910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.844937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.845894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.845987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.846930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.846963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.847052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.847079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.847166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.617 [2024-07-16 00:17:51.847195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.617 qpair failed and we were unable to recover it.
00:34:17.617 [2024-07-16 00:17:51.847291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.847410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.847526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.847640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.847757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 
00:34:17.617 [2024-07-16 00:17:51.847869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.847897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.847992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.617 [2024-07-16 00:17:51.848020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.617 qpair failed and we were unable to recover it. 00:34:17.617 [2024-07-16 00:17:51.848108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.848457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.848951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.848980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.849071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.849657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.849893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.849922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.850286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.850840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.850963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.850997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.851464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.851928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.851954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.852057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.852636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.852890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.852917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.853243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.853870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.853898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.853983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.854442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 00:34:17.618 [2024-07-16 00:17:51.854910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.618 [2024-07-16 00:17:51.854942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.618 qpair failed and we were unable to recover it. 
00:34:17.618 [2024-07-16 00:17:51.855044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 
00:34:17.619 [2024-07-16 00:17:51.855666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.855887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.855914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 
00:34:17.619 [2024-07-16 00:17:51.856232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 
00:34:17.619 [2024-07-16 00:17:51.856830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.856955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.856982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.857074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.857102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.857192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.857219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 00:34:17.619 [2024-07-16 00:17:51.857307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.619 [2024-07-16 00:17:51.857335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.619 qpair failed and we were unable to recover it. 
00:34:17.619 [2024-07-16 00:17:51.857415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.857442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.857540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.857567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.857651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.857678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.857762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.857792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.857890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.857919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.858857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.858973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.859920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.859947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.860970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.860996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.619 [2024-07-16 00:17:51.861932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.619 [2024-07-16 00:17:51.861962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.619 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.862899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.862987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.863947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.863974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.864895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.864923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.865950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.865977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.866901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.866931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.867939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.867968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.868058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.868089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.620 qpair failed and we were unable to recover it.
00:34:17.620 [2024-07-16 00:17:51.868187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.620 [2024-07-16 00:17:51.868214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.868887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.868914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.869962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.869988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.870904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.870931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.871015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.621 [2024-07-16 00:17:51.871042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.621 qpair failed and we were unable to recover it.
00:34:17.621 [2024-07-16 00:17:51.871132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 
00:34:17.621 [2024-07-16 00:17:51.871745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.871893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.871991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 
00:34:17.621 [2024-07-16 00:17:51.872368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.872838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 
00:34:17.621 [2024-07-16 00:17:51.872956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.872983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873348] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:17.621 [2024-07-16 00:17:51.873416] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.621 [2024-07-16 00:17:51.873426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.873895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.873922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 
00:34:17.621 [2024-07-16 00:17:51.874121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 
00:34:17.621 [2024-07-16 00:17:51.874726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.874953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.874979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.621 [2024-07-16 00:17:51.875058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.621 [2024-07-16 00:17:51.875085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.621 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.875277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.875851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.875878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.875972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.876443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.876887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.876913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.877007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.877633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.877875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.877904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.878251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.878826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.878953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.878982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.879542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.879900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.879980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.880106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.880682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.880931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.880957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.881039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.881068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 00:34:17.622 [2024-07-16 00:17:51.881161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.622 [2024-07-16 00:17:51.881188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.622 qpair failed and we were unable to recover it. 
00:34:17.622 [2024-07-16 00:17:51.881273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.881386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.881492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.881607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.881726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.881886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.881913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.882021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.622 [2024-07-16 00:17:51.882063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.622 qpair failed and we were unable to recover it.
00:34:17.622 [2024-07-16 00:17:51.882154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.882906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.882988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.883910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.883936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.884965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.884992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.885898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.885924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.886910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.886994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.887876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.887904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.888001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.888029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.888112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.888145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.888244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.888271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.888359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.623 [2024-07-16 00:17:51.888387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.623 qpair failed and we were unable to recover it.
00:34:17.623 [2024-07-16 00:17:51.888477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.888508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.888598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.888625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.888714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.888740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.888831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.888862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.888951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.888981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.889897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.889991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.890936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.890963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.891970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.891999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.892959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.892985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.893888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.893980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.894904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.894991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.895018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.895097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.895123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.895220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.895247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.895338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.624 [2024-07-16 00:17:51.895368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.624 qpair failed and we were unable to recover it.
00:34:17.624 [2024-07-16 00:17:51.895455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.895483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.895564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.895593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.895678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.895706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.895795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.895824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.895913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.895939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.896956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.896985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.897907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.897995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.898903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.898981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.899932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.899961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.900963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.900991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.901905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.901932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.902026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.902053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.902152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.902180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.625 qpair failed and we were unable to recover it.
00:34:17.625 [2024-07-16 00:17:51.902268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.625 [2024-07-16 00:17:51.902295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.902378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.902407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.902499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.902525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.902612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.902641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.902732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.902758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.902895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.902925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.626 [2024-07-16 00:17:51.903885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.626 qpair failed and we were unable to recover it.
00:34:17.626 [2024-07-16 00:17:51.903982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.904565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.904932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.904961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.905166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.905767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.905904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.905991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.906339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.906816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.906943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.906971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.907524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.907878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.907907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.908047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.908081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 
00:34:17.626 [2024-07-16 00:17:51.908197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.908226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.908315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.908342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.908423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.908450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.908533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.626 [2024-07-16 00:17:51.908560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.626 qpair failed and we were unable to recover it. 00:34:17.626 [2024-07-16 00:17:51.908639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.908666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 
00:34:17.627 [2024-07-16 00:17:51.908760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.908787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.908873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.908902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.908999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 
00:34:17.627 [2024-07-16 00:17:51.909371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 00:34:17.627 [2024-07-16 00:17:51.909828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.627 [2024-07-16 00:17:51.909857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.627 qpair failed and we were unable to recover it. 
00:34:17.627 [2024-07-16 00:17:51.909950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.627 [2024-07-16 00:17:51.909977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420
00:34:17.627 qpair failed and we were unable to recover it.
00:34:17.627 [2024-07-16 00:17:51.910069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.627 [2024-07-16 00:17:51.910098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.627 qpair failed and we were unable to recover it.
00:34:17.627 [2024-07-16 00:17:51.910194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.627 [2024-07-16 00:17:51.910222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.627 EAL: No free 2048 kB hugepages reported on node 1
00:34:17.627 qpair failed and we were unable to recover it.
00:34:17.627 [2024-07-16 00:17:51.910322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.627 [2024-07-16 00:17:51.910351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.627 qpair failed and we were unable to recover it.
00:34:17.627 [2024-07-16 00:17:51.910441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.627 [2024-07-16 00:17:51.910468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.627 qpair failed and we were unable to recover it.
00:34:17.628 [2024-07-16 00:17:51.916604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.916631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.916766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.916793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.916874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.916901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.916988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.917150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.917273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.917393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.917516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.917633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.917800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.917924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.917951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.918535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.918907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.918934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.919158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.919777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.919914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.919996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.920446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.920902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.920931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.921012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.921705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.921946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.921974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.922320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 00:34:17.628 [2024-07-16 00:17:51.922793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.628 qpair failed and we were unable to recover it. 
00:34:17.628 [2024-07-16 00:17:51.922903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.628 [2024-07-16 00:17:51.922930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.629 [2024-07-16 00:17:51.923595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.923948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.923977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.629 [2024-07-16 00:17:51.924189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.629 [2024-07-16 00:17:51.924797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.924914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.924941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.629 [2024-07-16 00:17:51.925376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.925810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.629 [2024-07-16 00:17:51.925927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.925954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.926034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.926063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.926163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.926191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.926295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.926321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 00:34:17.629 [2024-07-16 00:17:51.926399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.629 [2024-07-16 00:17:51.926426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.629 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.939714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.939742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.939836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.939863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.939950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.939978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.940353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.940814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.940918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.940945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.941636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.941912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.941990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.942135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.942261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.942396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.942512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.942651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.942782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 
00:34:17.631 [2024-07-16 00:17:51.942948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.942975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.943057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.943084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.943160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.943187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.943278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.631 [2024-07-16 00:17:51.943305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.631 qpair failed and we were unable to recover it. 00:34:17.631 [2024-07-16 00:17:51.943402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.943429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.943549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.943576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.943657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.943684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.943767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.943796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.943882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.943910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.943994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.944108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.944717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.944937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.944964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.945013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:17.632 [2024-07-16 00:17:51.945050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.945077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.945855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.945888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.945985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.946615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.946933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.946960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.947153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.947716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.947952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.947979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.948321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.948808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.948926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.948954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.949043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.949071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.949195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.949240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.949383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.949413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 00:34:17.632 [2024-07-16 00:17:51.949504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.949533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.632 qpair failed and we were unable to recover it. 
00:34:17.632 [2024-07-16 00:17:51.949621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.632 [2024-07-16 00:17:51.949655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.633 qpair failed and we were unable to recover it. 00:34:17.633 [2024-07-16 00:17:51.949760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.633 [2024-07-16 00:17:51.949788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.633 qpair failed and we were unable to recover it. 00:34:17.633 [2024-07-16 00:17:51.949891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.633 [2024-07-16 00:17:51.949922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.633 qpair failed and we were unable to recover it. 00:34:17.633 [2024-07-16 00:17:51.950008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.633 [2024-07-16 00:17:51.950036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.633 qpair failed and we were unable to recover it. 00:34:17.633 [2024-07-16 00:17:51.950129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.633 [2024-07-16 00:17:51.950163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.633 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.965328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.965478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.965590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.965707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.965816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.965925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.965952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.966511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.966884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.966911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.967118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.967761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.967900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.967988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.968391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.968900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.968927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.969009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.969690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.969917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.969943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.970048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.970076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.970177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.970204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 
00:34:17.635 [2024-07-16 00:17:51.970297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.970324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.970414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.635 [2024-07-16 00:17:51.970441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.635 qpair failed and we were unable to recover it. 00:34:17.635 [2024-07-16 00:17:51.970531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.970558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.970646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.970673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.970751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.970778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.970888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.970914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.970998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.971500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.971884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.971915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.972123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.972700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.972948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.972977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.973373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.973836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.973952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.973981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.974075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.974102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.974204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.974231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.974364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.974396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 00:34:17.636 [2024-07-16 00:17:51.974482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.636 [2024-07-16 00:17:51.974509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.636 qpair failed and we were unable to recover it. 
00:34:17.636 [2024-07-16 00:17:51.974611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.974638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.974736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.974763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.975846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.975883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.975976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.976961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.976988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.636 [2024-07-16 00:17:51.977792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.636 [2024-07-16 00:17:51.977817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.636 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.977900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.977927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.978944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.978971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.979908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.979994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.980943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.980971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.981915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.981943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.982889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.982917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.983926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.983952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.637 [2024-07-16 00:17:51.984032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.637 [2024-07-16 00:17:51.984058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.637 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.984882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.984909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.985907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.985935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.986969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.986998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.987956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.987984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.988927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.988954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.989042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.989070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.989155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.989183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.989268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.638 [2024-07-16 00:17:51.989296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.638 qpair failed and we were unable to recover it.
00:34:17.638 [2024-07-16 00:17:51.989384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.638 [2024-07-16 00:17:51.989411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.638 qpair failed and we were unable to recover it. 00:34:17.638 [2024-07-16 00:17:51.989515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.638 [2024-07-16 00:17:51.989543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.638 qpair failed and we were unable to recover it. 00:34:17.638 [2024-07-16 00:17:51.989640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.638 [2024-07-16 00:17:51.989667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.638 qpair failed and we were unable to recover it. 00:34:17.638 [2024-07-16 00:17:51.989763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.638 [2024-07-16 00:17:51.989793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.638 qpair failed and we were unable to recover it. 00:34:17.638 [2024-07-16 00:17:51.989878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.989910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.989999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.990631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.990929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.990958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.991294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.991835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.991864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.991999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.992629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.992889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.992915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.993331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.993914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.993943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.994039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.994067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.994158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.994187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.994275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.994301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.994405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.994432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.994529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.994556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.995636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.995671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.995816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.995843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.995953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.995980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.996371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.996871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.996897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.997620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.997653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.997750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.997779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.997884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.997912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.997999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.998133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.998289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.998403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.998526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.998634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.998765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 
00:34:17.639 [2024-07-16 00:17:51.998886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.998915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.999003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.999030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.639 qpair failed and we were unable to recover it. 00:34:17.639 [2024-07-16 00:17:51.999124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.639 [2024-07-16 00:17:51.999179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:51.999511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:51.999906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:51.999982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.000110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.000299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.000421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.000545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.000666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.000776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.000886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.000912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.001410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.001947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.001974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.002067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.002709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.002953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.002980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.003341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.003832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.003943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.003971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.004822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.004854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.004959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.004988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.005439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.005956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.005983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.006080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.006238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.006363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.006529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.006651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 
00:34:17.640 [2024-07-16 00:17:52.006769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.640 qpair failed and we were unable to recover it. 00:34:17.640 [2024-07-16 00:17:52.006888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.640 [2024-07-16 00:17:52.006916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.006999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.007351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.007875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.007903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.007990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.008595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.008941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.008968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.009184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.009736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.009968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.009995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.010086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.010113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.010212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.010242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.010327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.010355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.010441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.010469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.011236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.011364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.011496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.011612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.011772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.011892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.011923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.012277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.012865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.012893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.012974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 
00:34:17.641 [2024-07-16 00:17:52.013436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.641 [2024-07-16 00:17:52.013709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.641 qpair failed and we were unable to recover it. 00:34:17.641 [2024-07-16 00:17:52.013790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.013817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.013902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.013929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.014015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.014610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.014947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.014974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.015186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.015749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.015894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.015921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.016389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.016849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.016958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.016985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.017126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.017159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.017252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.017282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.017365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.017392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.018173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.018206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 
00:34:17.642 [2024-07-16 00:17:52.018324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.018352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.018438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.018466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.018545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.018572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.642 [2024-07-16 00:17:52.018662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.642 [2024-07-16 00:17:52.018691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.642 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.018779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.018806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.018898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.018925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.019013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.019040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.019128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.019166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.019908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.019940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.020150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.020781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.020887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.020914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.021409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.021870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.021900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.021985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.022622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.022968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.022995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.023197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.023825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.023938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.023966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 
00:34:17.643 [2024-07-16 00:17:52.024400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.643 qpair failed and we were unable to recover it. 00:34:17.643 [2024-07-16 00:17:52.024760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.643 [2024-07-16 00:17:52.024787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.024873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.024901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.024980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.025581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.025950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.025977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.026258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.026858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.026971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.026998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.027519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.027915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.027957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.028211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.028818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.028845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.028979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.029583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.029931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.029957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.030066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.030093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 
00:34:17.644 [2024-07-16 00:17:52.030187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.030215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.644 qpair failed and we were unable to recover it. 00:34:17.644 [2024-07-16 00:17:52.030322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.644 [2024-07-16 00:17:52.030350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.030449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.030476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.030555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.030581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.030677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.030718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.030805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.030834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.030928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.030955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.031404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.031870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.031898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.031982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.032560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.032893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.032920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.033119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.033763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.033880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.033910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.034335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.034793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 
00:34:17.645 [2024-07-16 00:17:52.034905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.034932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.035015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.035043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.035127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.645 [2024-07-16 00:17:52.035160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.645 qpair failed and we were unable to recover it. 00:34:17.645 [2024-07-16 00:17:52.035265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.035375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 
00:34:17.646 [2024-07-16 00:17:52.035498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.035618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.035735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.035852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.035966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.035996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 
00:34:17.646 [2024-07-16 00:17:52.036090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036448] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.646 [2024-07-16 00:17:52.036479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 [2024-07-16 00:17:52.036480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036498] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.646 [2024-07-16 00:17:52.036512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:17.646 [2024-07-16 00:17:52.036525] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.646 [2024-07-16 00:17:52.036558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:17.646 [2024-07-16 00:17:52.036670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 [2024-07-16 00:17:52.036616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:17.646 [2024-07-16 00:17:52.036638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:17.646 [2024-07-16 00:17:52.036786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.036902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.036929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 
00:34:17.646 [2024-07-16 00:17:52.037018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 
00:34:17.646 [2024-07-16 00:17:52.037662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.037910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.037938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.038083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.038112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 00:34:17.646 [2024-07-16 00:17:52.038217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.646 [2024-07-16 00:17:52.038245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.646 qpair failed and we were unable to recover it. 
00:34:17.646 [2024-07-16 00:17:52.038333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.646 [2024-07-16 00:17:52.038361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.646 qpair failed and we were unable to recover it.
00:34:17.646 [2024-07-16 00:17:52.038448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.646 [2024-07-16 00:17:52.038476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.646 qpair failed and we were unable to recover it.
00:34:17.646 [2024-07-16 00:17:52.038570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.646 [2024-07-16 00:17:52.038598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.646 qpair failed and we were unable to recover it.
00:34:17.646 [2024-07-16 00:17:52.038692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.646 [2024-07-16 00:17:52.038719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.646 qpair failed and we were unable to recover it.
00:34:17.646 [2024-07-16 00:17:52.038810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.646 [2024-07-16 00:17:52.038838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.646 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.038931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.038962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.039897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.039925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.040950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.040978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.041875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.041919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.042899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.042926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.647 qpair failed and we were unable to recover it.
00:34:17.647 [2024-07-16 00:17:52.043739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.647 [2024-07-16 00:17:52.043768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.043860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.043887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.043980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.044967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.044994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.045920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.045947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.046970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.046997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.047870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.047973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.648 [2024-07-16 00:17:52.048708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.648 qpair failed and we were unable to recover it.
00:34:17.648 [2024-07-16 00:17:52.048809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.048836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.048938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.048965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.049903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.049930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.050902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.050995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.051899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.051925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.052014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.052040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.052127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.052161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.052262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.052290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.052395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.649 [2024-07-16 00:17:52.052421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.649 qpair failed and we were unable to recover it.
00:34:17.649 [2024-07-16 00:17:52.052510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.052537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.649 [2024-07-16 00:17:52.052634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.052664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.649 [2024-07-16 00:17:52.052754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.052781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.649 [2024-07-16 00:17:52.052870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.052897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.649 [2024-07-16 00:17:52.052984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.053011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 
00:34:17.649 [2024-07-16 00:17:52.053092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.053119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.649 [2024-07-16 00:17:52.053216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.649 [2024-07-16 00:17:52.053243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.649 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.053348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.053462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.053584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.053709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.053816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.053935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.053962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.054365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.054842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.054947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.054973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.055578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.055935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.055961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.056169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.056760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.056903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.056995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.057026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.057127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.057160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.057250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.057278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 
00:34:17.650 [2024-07-16 00:17:52.057366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.057394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.057482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.650 [2024-07-16 00:17:52.057509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.650 qpair failed and we were unable to recover it. 00:34:17.650 [2024-07-16 00:17:52.057596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.057623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.057719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.057746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.057840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.057866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.057976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.058583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.058896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.058986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.059243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.059834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.059956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.059983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.060439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.060894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.060921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.061016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.061646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.061901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.061986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.062019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.062110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.062145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 
00:34:17.651 [2024-07-16 00:17:52.062232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.062260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.062354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.651 [2024-07-16 00:17:52.062383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.651 qpair failed and we were unable to recover it. 00:34:17.651 [2024-07-16 00:17:52.062474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.652 [2024-07-16 00:17:52.062500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.652 qpair failed and we were unable to recover it. 00:34:17.652 [2024-07-16 00:17:52.062594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.652 [2024-07-16 00:17:52.062622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.652 qpair failed and we were unable to recover it. 00:34:17.652 [2024-07-16 00:17:52.062711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.652 [2024-07-16 00:17:52.062740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.652 qpair failed and we were unable to recover it. 
00:34:17.652 [2024-07-16 00:17:52.062832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.062859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.062953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.062980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.063924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.063954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.064904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.064931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.065881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.065974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.066925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.066953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.067058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.067087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.067195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.067226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.652 [2024-07-16 00:17:52.067313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.652 [2024-07-16 00:17:52.067340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.652 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.067432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.067462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.067545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.067572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.067652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.067679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.067763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.067789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.067879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.067906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.068880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.068906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.069884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.069981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.070952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.070978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.071896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.071993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.072022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.072116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.072150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.072248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.072275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.653 [2024-07-16 00:17:52.072381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.653 [2024-07-16 00:17:52.072407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.653 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.072495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.072522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.072610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.072637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.072716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.072743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.072824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.072850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.072945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.072972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.073930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.073957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.074057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.074099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.074195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.074224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.074310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.654 [2024-07-16 00:17:52.074337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.654 qpair failed and we were unable to recover it.
00:34:17.654 [2024-07-16 00:17:52.074437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.074465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.074554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.074580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.074671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.074700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.074787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.074813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.074900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.074926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 
00:34:17.654 [2024-07-16 00:17:52.075004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 
00:34:17.654 [2024-07-16 00:17:52.075592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.075961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.075993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.076088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 
00:34:17.654 [2024-07-16 00:17:52.076217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.076345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.076460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.076591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 00:34:17.654 [2024-07-16 00:17:52.076707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.654 [2024-07-16 00:17:52.076738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.654 qpair failed and we were unable to recover it. 
00:34:17.654 [2024-07-16 00:17:52.076830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.076857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.076959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.076990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.077456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.077925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.077952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.078044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.078636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.078918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.078947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.079286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.079755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.079874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.079901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.080497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.080971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.080999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 
00:34:17.655 [2024-07-16 00:17:52.081080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.081106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.081208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.081236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.081337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.081364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.081456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.655 [2024-07-16 00:17:52.081483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.655 qpair failed and we were unable to recover it. 00:34:17.655 [2024-07-16 00:17:52.081567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.081594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.081695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.081725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.081823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.081853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.081945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.081975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.082285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.082857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.082969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.082996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.083443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.083923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.083951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.084037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.084625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.084970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.084997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.085076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.085210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.085330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.085445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.085559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 00:34:17.656 [2024-07-16 00:17:52.085676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.656 [2024-07-16 00:17:52.085706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.656 qpair failed and we were unable to recover it. 
00:34:17.656 [2024-07-16 00:17:52.085841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.085868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.656 [2024-07-16 00:17:52.085961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.085989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.656 [2024-07-16 00:17:52.086081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.086111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.656 [2024-07-16 00:17:52.086206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.086234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.656 [2024-07-16 00:17:52.086326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.086354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.656 [2024-07-16 00:17:52.086448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.656 [2024-07-16 00:17:52.086475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.656 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.086576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.086605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.086695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.086722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.086811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.086839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.086927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.086954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.087902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.087987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.088895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.088993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.089910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.089940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.090968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.090995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.091092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.091119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.657 [2024-07-16 00:17:52.091233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.657 [2024-07-16 00:17:52.091262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.657 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.091938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.091966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.092918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.092946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.093900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.093983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.094938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.094965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.095873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.095902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.096023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.096052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.096145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.658 [2024-07-16 00:17:52.096178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.658 qpair failed and we were unable to recover it.
00:34:17.658 [2024-07-16 00:17:52.096287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.096398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.096532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.096651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.096762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.096888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.096916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.097009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.097035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.097119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.097152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.097255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.659 [2024-07-16 00:17:52.097282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.659 qpair failed and we were unable to recover it.
00:34:17.659 [2024-07-16 00:17:52.097373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.097399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.097487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.097514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.097606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.097636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.097728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.097757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.097846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.097874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 
00:34:17.659 [2024-07-16 00:17:52.097972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 
00:34:17.659 [2024-07-16 00:17:52.098595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.098961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.098989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 
00:34:17.659 [2024-07-16 00:17:52.099210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 
00:34:17.659 [2024-07-16 00:17:52.099791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.099900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.099928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.100013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.659 [2024-07-16 00:17:52.100040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.659 qpair failed and we were unable to recover it. 00:34:17.659 [2024-07-16 00:17:52.100146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.100276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.100393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.100504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.100628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.100745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.100849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.100964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.100991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.101563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.101889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.101915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.102154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.102750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.102898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.102989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.103381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.103833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.103860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.103973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.104628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.104912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.104949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.105085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.105112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.105215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.105244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 
00:34:17.660 [2024-07-16 00:17:52.105336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.105365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.105466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.660 [2024-07-16 00:17:52.105494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.660 qpair failed and we were unable to recover it. 00:34:17.660 [2024-07-16 00:17:52.105587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.105615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.105746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.105774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.105868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.105897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 
00:34:17.661 [2024-07-16 00:17:52.105990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 
00:34:17.661 [2024-07-16 00:17:52.106591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.106879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.106978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 
00:34:17.661 [2024-07-16 00:17:52.107247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 
00:34:17.661 [2024-07-16 00:17:52.107879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.107909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.107997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.108106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.108237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.108354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 
00:34:17.661 [2024-07-16 00:17:52.108498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.108621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.661 [2024-07-16 00:17:52.108747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.661 [2024-07-16 00:17:52.108774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.661 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.108864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.108892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.108979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 
00:34:17.928 [2024-07-16 00:17:52.109088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 
00:34:17.928 [2024-07-16 00:17:52.109683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.109929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.109965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.110065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.110233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 
00:34:17.928 [2024-07-16 00:17:52.110375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.110511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.110659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.928 [2024-07-16 00:17:52.110772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.928 [2024-07-16 00:17:52.110798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.928 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.110902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.110929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.111018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.111627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.111885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.111973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.112090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.112220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.112330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.112440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.112563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.112673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.112853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.112903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.113578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.113889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.113929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.114279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.114855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.114966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.114992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.115575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.115915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.115942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.116148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 
00:34:17.929 [2024-07-16 00:17:52.116804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.929 [2024-07-16 00:17:52.116914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.929 [2024-07-16 00:17:52.116941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.929 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [2024-07-16 00:17:52.117371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.117859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.117887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [2024-07-16 00:17:52.117986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [2024-07-16 00:17:52.118583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.118927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.118958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [2024-07-16 00:17:52.119169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [2024-07-16 00:17:52.119776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.119900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.119928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.120012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.120039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.120133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.120167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 00:34:17.930 [2024-07-16 00:17:52.120255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.930 [2024-07-16 00:17:52.120283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.930 qpair failed and we were unable to recover it. 
00:34:17.930 [... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats roughly 110 more times between 00:17:52.120 and 00:17:52.134, cycling through tqpair=0x12a7990, tqpair=0x7f48a0000b90, and tqpair=0x7f4890000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:17.933 [2024-07-16 00:17:52.134174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.134755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.134897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.134981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.135328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.135798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.135914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.135942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.136518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.136890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.136990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.137126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.137716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.137956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.137983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.138309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.138768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.138887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.138916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.139007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.139034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.139122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.139162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.139251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.139280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 00:34:17.933 [2024-07-16 00:17:52.139371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.933 [2024-07-16 00:17:52.139402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.933 qpair failed and we were unable to recover it. 
00:34:17.933 [2024-07-16 00:17:52.139484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.139511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.139601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.139630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.139712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.139745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.139835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.139862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.139949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.139976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.140056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.140636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.140890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.140986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.141222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.141805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.141919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.141947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.142406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.142869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.142896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.142985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.143607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.143955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.143982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 00:34:17.934 [2024-07-16 00:17:52.144066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.934 [2024-07-16 00:17:52.144093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.934 qpair failed and we were unable to recover it. 
00:34:17.934 [2024-07-16 00:17:52.144194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.144896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.144924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.934 [2024-07-16 00:17:52.145014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.934 [2024-07-16 00:17:52.145041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.934 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.145899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.145999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.146945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.146972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.147895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.147921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.148939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.148966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.149908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.149998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.150033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.150114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.150148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.150233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.150260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.150344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.935 [2024-07-16 00:17:52.150371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.935 qpair failed and we were unable to recover it.
00:34:17.935 [2024-07-16 00:17:52.150454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.150482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.150575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.150602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.150698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.150726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.150814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.150841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.150934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.150960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.151969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.151995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.152917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.152944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.153840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:17.936 [2024-07-16 00:17:52.153962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.153989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:17.936 [2024-07-16 00:17:52.154086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.154239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420
00:34:17.936 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.154358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:17.936 [2024-07-16 00:17:52.154468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:17.936 [2024-07-16 00:17:52.154580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.154702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.154805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.154917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.154943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.155026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.155052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.155149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.155177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.155283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.155310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.155393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.936 [2024-07-16 00:17:52.155419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420
00:34:17.936 qpair failed and we were unable to recover it.
00:34:17.936 [2024-07-16 00:17:52.155532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.936 [2024-07-16 00:17:52.155559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.936 qpair failed and we were unable to recover it. 00:34:17.936 [2024-07-16 00:17:52.155645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.155672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.155761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.155791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.155879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.155907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.155998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.156109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.156686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.156914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.156942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.157253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.157778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.157896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.157923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.158504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.158900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.158985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.159103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.159671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.159910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.159939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.160328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.160851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 
00:34:17.937 [2024-07-16 00:17:52.160967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.160994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.161080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.161106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.161212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.161240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.161329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.937 [2024-07-16 00:17:52.161355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.937 qpair failed and we were unable to recover it. 00:34:17.937 [2024-07-16 00:17:52.161442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.161470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.161560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.161588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.161670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.161698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.161791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.161820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.161904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.161932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.162130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.162736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.162958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.162985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.163317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.163806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.163943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.163973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.164569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.164932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.164965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.165170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.165728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.165949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.165977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.166404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 00:34:17.938 [2024-07-16 00:17:52.166875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.166902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.938 qpair failed and we were unable to recover it. 
00:34:17.938 [2024-07-16 00:17:52.166993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.938 [2024-07-16 00:17:52.167020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.167572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.167905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.167933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.168133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.168738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.168879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.168921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.169434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.169904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.169932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.170021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.170623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.170968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.170997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.171208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 
00:34:17.939 [2024-07-16 00:17:52.171797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.171904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.171931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.172014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.939 [2024-07-16 00:17:52.172040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.939 qpair failed and we were unable to recover it. 00:34:17.939 [2024-07-16 00:17:52.172128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.172274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.172389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.172501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.172616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.172734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.172847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.172961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.172989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.173536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.173904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.173945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.174045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.174169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:17.940 [2024-07-16 00:17:52.174331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.940 [2024-07-16 00:17:52.174487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.940 [2024-07-16 00:17:52.174607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.174727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.174846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.174961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.174989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.175313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.175769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.175884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.175911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.176482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.176855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.176972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.940 [2024-07-16 00:17:52.177093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.177224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.177338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.177453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 00:34:17.940 [2024-07-16 00:17:52.177571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.940 [2024-07-16 00:17:52.177605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.940 qpair failed and we were unable to recover it. 
00:34:17.941 [2024-07-16 00:17:52.180704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.941 [2024-07-16 00:17:52.180731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.941 qpair failed and we were unable to recover it. 00:34:17.941 A controller has encountered a failure and is being reset. 00:34:17.941 [2024-07-16 00:17:52.180840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.941 [2024-07-16 00:17:52.180869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f48a0000b90 with addr=10.0.0.2, port=4420 00:34:17.941 qpair failed and we were unable to recover it. 00:34:17.941 [2024-07-16 00:17:52.180969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.941 [2024-07-16 00:17:52.180999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4898000b90 with addr=10.0.0.2, port=4420 00:34:17.941 qpair failed and we were unable to recover it. 00:34:17.941 [2024-07-16 00:17:52.181094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.941 [2024-07-16 00:17:52.181126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.941 qpair failed and we were unable to recover it. 00:34:17.941 [2024-07-16 00:17:52.181233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.941 [2024-07-16 00:17:52.181260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a7990 with addr=10.0.0.2, port=4420 00:34:17.941 qpair failed and we were unable to recover it. 
00:34:17.942 [2024-07-16 00:17:52.185571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.942 [2024-07-16 00:17:52.185599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4890000b90 with addr=10.0.0.2, port=4420 00:34:17.942 qpair failed and we were unable to recover it. 00:34:17.942 [2024-07-16 00:17:52.185732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.942 [2024-07-16 00:17:52.185776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b5320 with addr=10.0.0.2, port=4420 00:34:17.942 [2024-07-16 00:17:52.185796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b5320 is same with the state(5) to be set 00:34:17.942 [2024-07-16 00:17:52.185823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b5320 (9): Bad file descriptor 00:34:17.942 [2024-07-16 00:17:52.185843] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:17.942 [2024-07-16 00:17:52.185858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:17.942 [2024-07-16 00:17:52.185875] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:17.942 Unable to reset the controller. 
00:34:17.942 Malloc0 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 [2024-07-16 00:17:52.198841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 [2024-07-16 00:17:52.227086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.942 00:17:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1383630 00:34:18.873 Controller properly reset. 
00:34:24.132 Initializing NVMe Controllers 00:34:24.132 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:24.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:24.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:24.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:24.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:24.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:24.132 Initialization complete. Launching workers. 00:34:24.132 Starting thread on core 1 00:34:24.132 Starting thread on core 2 00:34:24.132 Starting thread on core 3 00:34:24.132 Starting thread on core 0 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:24.132 00:34:24.132 real 0m10.747s 00:34:24.132 user 0m33.076s 00:34:24.132 sys 0m7.799s 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:24.132 ************************************ 00:34:24.132 END TEST nvmf_target_disconnect_tc2 00:34:24.132 ************************************ 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:24.132 00:17:58 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:24.132 rmmod nvme_tcp 00:34:24.132 rmmod nvme_fabrics 00:34:24.132 rmmod nvme_keyring 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1383935 ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1383935 ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1383935' 00:34:24.132 killing process with pid 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@965 -- # kill 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1383935 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.132 00:17:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.032 00:18:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:26.032 00:34:26.032 real 0m15.004s 00:34:26.032 user 0m58.372s 00:34:26.032 sys 0m9.944s 00:34:26.032 00:18:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.032 00:18:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:26.032 ************************************ 00:34:26.032 END TEST nvmf_target_disconnect 00:34:26.032 ************************************ 00:34:26.032 00:18:00 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:26.032 00:18:00 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.032 00:18:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.290 00:18:00 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:26.290 00:34:26.290 real 27m6.458s 00:34:26.290 user 75m38.965s 00:34:26.290 sys 6m6.874s 
00:34:26.290 00:18:00 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:26.290 00:18:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.290 ************************************ 00:34:26.291 END TEST nvmf_tcp 00:34:26.291 ************************************ 00:34:26.291 00:18:00 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:26.291 00:18:00 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:26.291 00:18:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:26.291 00:18:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:26.291 00:18:00 -- common/autotest_common.sh@10 -- # set +x 00:34:26.291 ************************************ 00:34:26.291 START TEST spdkcli_nvmf_tcp 00:34:26.291 ************************************ 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:26.291 * Looking for test storage... 
00:34:26.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1384876 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1384876 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1384876 ']' 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:26.291 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.291 [2024-07-16 00:18:00.715693] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:26.291 [2024-07-16 00:18:00.715804] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384876 ] 00:34:26.291 EAL: No free 2048 kB hugepages reported on node 1 00:34:26.291 [2024-07-16 00:18:00.777621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:26.549 [2024-07-16 00:18:00.866239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.549 [2024-07-16 00:18:00.866270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.549 00:18:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:26.549 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:26.549 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:26.549 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 
00:34:26.549 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:26.549 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:26.549 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:26.549 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.549 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.549 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:26.549 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:26.550 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:26.550 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:26.550 ' 00:34:29.074 [2024-07-16 00:18:03.575180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.445 [2024-07-16 00:18:04.815366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:32.974 [2024-07-16 00:18:07.114529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:34.874 [2024-07-16 00:18:09.088671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:34:36.248 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:36.248 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:36.248 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:36.248 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:36.248 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 
00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:36.248 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:36.248 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:36.248 00:18:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:36.814 
00:18:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:36.814 00:18:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.815 00:18:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:36.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:36.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:36.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:36.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:36.815 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:36.815 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:36.815 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:36.815 
'\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:36.815 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:36.815 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:36.815 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:36.815 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:36.815 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:36.815 ' 00:34:42.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:42.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:42.165 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:42.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:42.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:42.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:42.166 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:42.166 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:42.166 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1384876 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1384876 ']' 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1384876 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1384876 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1384876' 00:34:42.166 killing process with pid 1384876 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1384876 00:34:42.166 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1384876 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1384876 ']' 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1384876 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1384876 ']' 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1384876 00:34:42.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1384876) - No such 
process 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1384876 is not found' 00:34:42.425 Process with pid 1384876 is not found 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:42.425 00:34:42.425 real 0m16.114s 00:34:42.425 user 0m34.272s 00:34:42.425 sys 0m0.818s 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:42.425 00:18:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.425 ************************************ 00:34:42.425 END TEST spdkcli_nvmf_tcp 00:34:42.425 ************************************ 00:34:42.425 00:18:16 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:42.425 00:18:16 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:42.425 00:18:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:42.425 00:18:16 -- common/autotest_common.sh@10 -- # set +x 00:34:42.425 ************************************ 00:34:42.425 START TEST nvmf_identify_passthru 00:34:42.425 ************************************ 00:34:42.425 00:18:16 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:42.425 * Looking for test storage... 
00:34:42.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:42.425 00:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.425 
00:18:16 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:42.425 00:18:16 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:42.425 00:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:42.425 00:18:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.425 00:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.425 00:18:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:42.425 00:18:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:42.425 00:18:16 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:42.425 00:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.331 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.331 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:44.331 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:34:44.331 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.332 
00:18:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:44.332 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:44.332 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:44.332 Found net devices under 0000:08:00.0: cvl_0_0 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:44.332 Found net devices under 0000:08:00.1: cvl_0_1 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:44.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:34:44.332 00:34:44.332 --- 10.0.0.2 ping statistics --- 00:34:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.332 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:44.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:34:44.332 00:34:44.332 --- 10.0.0.1 ping statistics --- 00:34:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.332 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:44.332 00:18:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:44.332 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.332 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:44.332 00:18:18 nvmf_identify_passthru -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:44.332 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:44.333 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:44.333 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:44.333 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:34:44.333 00:18:18 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:84:00.0 00:34:44.333 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:34:44.333 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:34:44.333 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:34:44.333 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:44.333 00:18:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:44.333 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.517 00:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:34:48.517 00:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:34:48.517 00:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:34:48.517 00:18:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:48.517 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1388889 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:52.700 00:18:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1388889 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1388889 ']' 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:52.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:52.700 00:18:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.700 [2024-07-16 00:18:27.015520] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:52.700 [2024-07-16 00:18:27.015607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.700 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.700 [2024-07-16 00:18:27.078936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:52.700 [2024-07-16 00:18:27.165972] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.700 [2024-07-16 00:18:27.166029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.700 [2024-07-16 00:18:27.166046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.700 [2024-07-16 00:18:27.166060] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.700 [2024-07-16 00:18:27.166073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
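Earlier in this trace, `gather_supported_nvmf_pci_devs` buckets NICs by PCI vendor:device ID (for example, 0x8086:0x159b is treated as an e810/ice part). A minimal sketch of that classification, using only the ID pairs visible in the trace above — this is an illustration, not the actual `nvmf/common.sh` logic:

```python
# Illustrative only: bucket NICs by PCI vendor:device ID, mirroring the
# e810/x722/mlx tables that appear in this trace. The ID sets below are
# copied from the log lines above, not from an authoritative device list.
E810 = {("0x8086", "0x1592"), ("0x8086", "0x159b")}
X722 = {("0x8086", "0x37d2")}
MLX = {("0x15b3", d) for d in
       ("0xa2dc", "0x1021", "0xa2d6", "0x101d",
        "0x1017", "0x1019", "0x1015", "0x1013")}

def classify(vendor: str, device: str) -> str:
    """Return the NIC family bucket for a PCI vendor/device pair."""
    key = (vendor, device)
    if key in E810:
        return "e810"
    if key in X722:
        return "x722"
    if key in MLX:
        return "mlx"
    return "unknown"
```

In the run above, both 0000:08:00.0 and 0000:08:00.1 report 0x8086:0x159b and so land in the e810 bucket, which is why `pci_devs` is reset to the e810 list before the per-device loop.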
00:34:52.700 [2024-07-16 00:18:27.166166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:52.700 [2024-07-16 00:18:27.166200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:52.700 [2024-07-16 00:18:27.166250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:52.700 [2024-07-16 00:18:27.166253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.700 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:52.700 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:52.700 00:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:52.700 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.700 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.700 INFO: Log level set to 20 00:34:52.700 INFO: Requests: 00:34:52.700 { 00:34:52.700 "jsonrpc": "2.0", 00:34:52.700 "method": "nvmf_set_config", 00:34:52.700 "id": 1, 00:34:52.700 "params": { 00:34:52.700 "admin_cmd_passthru": { 00:34:52.700 "identify_ctrlr": true 00:34:52.700 } 00:34:52.700 } 00:34:52.700 } 00:34:52.700 00:34:52.959 INFO: response: 00:34:52.959 { 00:34:52.959 "jsonrpc": "2.0", 00:34:52.959 "id": 1, 00:34:52.959 "result": true 00:34:52.959 } 00:34:52.959 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.959 00:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.959 INFO: Setting log level to 20 00:34:52.959 INFO: Setting log level to 20 00:34:52.959 INFO: Log level set to 20 00:34:52.959 INFO: Log level set to 20 00:34:52.959 
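The request/response pair logged above is plain JSON-RPC 2.0 sent over SPDK's UNIX-domain RPC socket by `rpc_cmd`. A hedged sketch of that exact `nvmf_set_config` exchange — only the message shape is shown; the socket transport is elided:

```python
import json

# Build the nvmf_set_config request seen in this trace: it enables
# passthrough of Identify Controller admin commands to the backing NVMe
# device (the --passthru-identify-ctrlr flag in the log).
request = {
    "jsonrpc": "2.0",
    "method": "nvmf_set_config",
    "id": 1,
    "params": {"admin_cmd_passthru": {"identify_ctrlr": True}},
}
wire = json.dumps(request)

# On success the target answers with a bare boolean result, as logged above.
response = json.loads('{"jsonrpc": "2.0", "id": 1, "result": true}')
assert response["id"] == request["id"]
ok = response["result"] is True
```

The same envelope (`jsonrpc`/`method`/`id`/`params`) carries every subsequent call in this run: `framework_start_init`, `nvmf_create_transport`, and the subsystem setup.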
INFO: Requests: 00:34:52.959 { 00:34:52.959 "jsonrpc": "2.0", 00:34:52.959 "method": "framework_start_init", 00:34:52.959 "id": 1 00:34:52.959 } 00:34:52.959 00:34:52.959 INFO: Requests: 00:34:52.959 { 00:34:52.959 "jsonrpc": "2.0", 00:34:52.959 "method": "framework_start_init", 00:34:52.959 "id": 1 00:34:52.959 } 00:34:52.959 00:34:52.959 [2024-07-16 00:18:27.306298] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:52.959 INFO: response: 00:34:52.959 { 00:34:52.959 "jsonrpc": "2.0", 00:34:52.959 "id": 1, 00:34:52.959 "result": true 00:34:52.959 } 00:34:52.959 00:34:52.959 INFO: response: 00:34:52.959 { 00:34:52.959 "jsonrpc": "2.0", 00:34:52.959 "id": 1, 00:34:52.959 "result": true 00:34:52.959 } 00:34:52.959 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.959 00:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.959 INFO: Setting log level to 40 00:34:52.959 INFO: Setting log level to 40 00:34:52.959 INFO: Setting log level to 40 00:34:52.959 [2024-07-16 00:18:27.316189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.959 00:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:52.959 00:18:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:34:52.959 00:18:27 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.959 00:18:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 Nvme0n1 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.263 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.263 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.263 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 [2024-07-16 00:18:30.184235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.263 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.263 00:18:30 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.263 [ 00:34:56.263 { 00:34:56.263 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:56.263 "subtype": "Discovery", 00:34:56.263 "listen_addresses": [], 00:34:56.263 "allow_any_host": true, 00:34:56.263 "hosts": [] 00:34:56.263 }, 00:34:56.263 { 00:34:56.263 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.263 "subtype": "NVMe", 00:34:56.263 "listen_addresses": [ 00:34:56.263 { 00:34:56.263 "trtype": "TCP", 00:34:56.263 "adrfam": "IPv4", 00:34:56.263 "traddr": "10.0.0.2", 00:34:56.263 "trsvcid": "4420" 00:34:56.263 } 00:34:56.263 ], 00:34:56.263 "allow_any_host": true, 00:34:56.263 "hosts": [], 00:34:56.263 "serial_number": "SPDK00000000000001", 00:34:56.263 "model_number": "SPDK bdev Controller", 00:34:56.263 "max_namespaces": 1, 00:34:56.263 "min_cntlid": 1, 00:34:56.263 "max_cntlid": 65519, 00:34:56.263 "namespaces": [ 00:34:56.263 { 00:34:56.263 "nsid": 1, 00:34:56.263 "bdev_name": "Nvme0n1", 00:34:56.263 "name": "Nvme0n1", 00:34:56.263 "nguid": "60633EB298344BCCA6BD186479B8DFB2", 00:34:56.263 "uuid": "60633eb2-9834-4bcc-a6bd-186479b8dfb2" 00:34:56.263 } 00:34:56.263 ] 00:34:56.263 } 00:34:56.263 ] 00:34:56.263 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:56.264 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:34:56.264 00:18:30 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:56.264 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:56.264 00:18:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:56.264 rmmod 
nvme_tcp 00:34:56.264 rmmod nvme_fabrics 00:34:56.264 rmmod nvme_keyring 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1388889 ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1388889 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1388889 ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1388889 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1388889 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1388889' 00:34:56.264 killing process with pid 1388889 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1388889 00:34:56.264 00:18:30 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1388889 00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:57.634 00:18:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:57.634 00:18:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:57.634 00:18:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.169 00:18:34 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:00.169 00:35:00.169 real 0m17.356s 00:35:00.169 user 0m25.834s 00:35:00.169 sys 0m2.011s 00:35:00.169 00:18:34 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:00.169 00:18:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:00.169 ************************************ 00:35:00.169 END TEST nvmf_identify_passthru 00:35:00.169 ************************************ 00:35:00.169 00:18:34 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:00.169 00:18:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:00.169 00:18:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:00.169 00:18:34 -- common/autotest_common.sh@10 -- # set +x 00:35:00.169 ************************************ 00:35:00.169 START TEST nvmf_dif 00:35:00.169 ************************************ 00:35:00.169 00:18:34 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:00.169 * Looking for test storage... 
00:35:00.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.169 00:18:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.169 00:18:34 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.169 00:18:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.169 00:18:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.169 00:18:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.169 00:18:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.169 00:18:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:00.169 00:18:34 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:00.169 00:18:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.169 00:18:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.170 00:18:34 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:00.170 00:18:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:00.170 00:18:34 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:00.170 00:18:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:35:01.548 Found 0000:08:00.0 (0x8086 - 0x159b) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:35:01.548 Found 0000:08:00.1 (0x8086 - 0x159b) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:35:01.548 Found net devices under 0000:08:00.0: cvl_0_0 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.548 00:18:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:35:01.549 Found net devices under 0000:08:00.1: cvl_0_1 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.549 00:18:35 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:35:01.549 00:35:01.549 --- 10.0.0.2 ping statistics --- 00:35:01.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.549 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:35:01.549 00:35:01.549 --- 10.0.0.1 ping statistics --- 00:35:01.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.549 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:01.549 00:18:35 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:02.483 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:35:02.483 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:02.483 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:35:02.483 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:35:02.483 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:35:02.483 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:35:02.483 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:35:02.483 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:35:02.483 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:35:02.483 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:35:02.483 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:35:02.483 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:35:02.483 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:35:02.483 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:35:02.483 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:35:02.483 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:35:02.483 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.483 00:18:36 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:02.483 00:18:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:02.483 00:18:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1391273 00:35:02.483 00:18:36 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1391273 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1391273 ']' 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.483 00:18:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:02.483 [2024-07-16 00:18:36.963629] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:02.483 [2024-07-16 00:18:36.963721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.483 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.742 [2024-07-16 00:18:37.027580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.742 [2024-07-16 00:18:37.114401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.742 [2024-07-16 00:18:37.114460] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.742 [2024-07-16 00:18:37.114478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.742 [2024-07-16 00:18:37.114491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.742 [2024-07-16 00:18:37.114503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:02.742 [2024-07-16 00:18:37.114540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:02.742 00:18:37 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:02.742 00:18:37 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.742 00:18:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:02.742 00:18:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:02.742 [2024-07-16 00:18:37.244350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.742 00:18:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:02.742 00:18:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:02.742 ************************************ 00:35:02.742 START TEST fio_dif_1_default 00:35:02.742 ************************************ 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:02.742 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.000 bdev_null0 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:03.000 [2024-07-16 00:18:37.284556] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:03.000 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:03.001 { 00:35:03.001 "params": { 00:35:03.001 "name": "Nvme$subsystem", 00:35:03.001 "trtype": "$TEST_TRANSPORT", 00:35:03.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.001 "adrfam": "ipv4", 00:35:03.001 "trsvcid": "$NVMF_PORT", 00:35:03.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.001 "hdgst": ${hdgst:-false}, 00:35:03.001 "ddgst": ${ddgst:-false} 00:35:03.001 }, 00:35:03.001 "method": "bdev_nvme_attach_controller" 00:35:03.001 } 00:35:03.001 EOF 00:35:03.001 )") 00:35:03.001 00:18:37 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:03.001 "params": { 00:35:03.001 "name": "Nvme0", 00:35:03.001 "trtype": "tcp", 00:35:03.001 "traddr": "10.0.0.2", 00:35:03.001 "adrfam": "ipv4", 00:35:03.001 "trsvcid": "4420", 00:35:03.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.001 "hdgst": false, 00:35:03.001 "ddgst": false 00:35:03.001 }, 00:35:03.001 "method": "bdev_nvme_attach_controller" 00:35:03.001 }' 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:03.001 00:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:03.259 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:03.259 fio-3.35 
00:35:03.259 Starting 1 thread 00:35:03.259 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.487 00:35:15.487 filename0: (groupid=0, jobs=1): err= 0: pid=1391446: Tue Jul 16 00:18:48 2024 00:35:15.487 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:35:15.487 slat (nsec): min=7346, max=59638, avg=9213.39, stdev=3578.19 00:35:15.487 clat (usec): min=40826, max=47659, avg=40999.18, stdev=427.15 00:35:15.487 lat (usec): min=40834, max=47698, avg=41008.40, stdev=427.60 00:35:15.487 clat percentiles (usec): 00:35:15.487 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:15.487 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:15.487 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:15.487 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:35:15.487 | 99.99th=[47449] 00:35:15.487 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:35:15.487 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:15.487 lat (msec) : 50=100.00% 00:35:15.487 cpu : usr=90.30%, sys=9.17%, ctx=13, majf=0, minf=244 00:35:15.487 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.487 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.487 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:15.487 00:35:15.487 Run status group 0 (all jobs): 00:35:15.487 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
sub in "$@" 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.487 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:35:15.488 real 0m11.035s 00:35:15.488 user 0m9.914s 00:35:15.488 sys 0m1.152s 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 ************************************ 00:35:15.488 END TEST fio_dif_1_default 00:35:15.488 ************************************ 00:35:15.488 00:18:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:15.488 00:18:48 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:15.488 00:18:48 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 ************************************ 00:35:15.488 START TEST fio_dif_1_multi_subsystems 00:35:15.488 ************************************ 00:35:15.488 00:18:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 bdev_null0 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 [2024-07-16 00:18:48.348982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 bdev_null1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.488 { 00:35:15.488 "params": { 00:35:15.488 "name": "Nvme$subsystem", 00:35:15.488 "trtype": "$TEST_TRANSPORT", 00:35:15.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.488 "adrfam": "ipv4", 00:35:15.488 "trsvcid": "$NVMF_PORT", 00:35:15.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.488 "hdgst": ${hdgst:-false}, 00:35:15.488 "ddgst": ${ddgst:-false} 00:35:15.488 }, 00:35:15.488 "method": "bdev_nvme_attach_controller" 00:35:15.488 } 00:35:15.488 EOF 00:35:15.488 )") 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 
00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.488 { 00:35:15.488 "params": { 00:35:15.488 "name": "Nvme$subsystem", 00:35:15.488 "trtype": "$TEST_TRANSPORT", 00:35:15.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.488 "adrfam": "ipv4", 00:35:15.488 "trsvcid": "$NVMF_PORT", 00:35:15.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.488 "hdgst": ${hdgst:-false}, 00:35:15.488 "ddgst": ${ddgst:-false} 00:35:15.488 }, 00:35:15.488 "method": "bdev_nvme_attach_controller" 00:35:15.488 } 00:35:15.488 EOF 00:35:15.488 )") 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:15.488 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:15.488 "params": { 00:35:15.488 "name": "Nvme0", 00:35:15.488 "trtype": "tcp", 00:35:15.488 "traddr": "10.0.0.2", 00:35:15.488 "adrfam": "ipv4", 00:35:15.488 "trsvcid": "4420", 00:35:15.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.488 "hdgst": false, 00:35:15.488 "ddgst": false 00:35:15.488 }, 00:35:15.488 "method": "bdev_nvme_attach_controller" 00:35:15.488 },{ 00:35:15.488 "params": { 00:35:15.488 "name": "Nvme1", 00:35:15.488 "trtype": "tcp", 00:35:15.488 "traddr": "10.0.0.2", 00:35:15.488 "adrfam": "ipv4", 00:35:15.488 "trsvcid": "4420", 00:35:15.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:15.488 "hdgst": false, 00:35:15.489 "ddgst": false 00:35:15.489 }, 00:35:15.489 "method": "bdev_nvme_attach_controller" 00:35:15.489 }' 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.489 00:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.489 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:15.489 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:15.489 fio-3.35 00:35:15.489 Starting 2 threads 00:35:15.489 EAL: No free 2048 kB hugepages reported on node 1 00:35:25.449 00:35:25.449 filename0: (groupid=0, jobs=1): err= 0: pid=1392460: Tue Jul 16 00:18:59 2024 00:35:25.449 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10018msec) 00:35:25.449 slat (nsec): min=7653, max=29797, avg=8984.98, stdev=1951.60 00:35:25.449 clat (usec): min=615, max=45015, avg=40858.61, stdev=2595.59 00:35:25.449 lat (usec): min=623, max=45045, avg=40867.60, stdev=2595.65 00:35:25.449 clat percentiles (usec): 00:35:25.449 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:25.449 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:25.449 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:25.449 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:25.449 | 99.99th=[44827] 00:35:25.449 bw ( KiB/s): min= 384, max= 448, per=49.94%, avg=390.40, stdev=16.74, samples=20 00:35:25.449 iops : min= 96, max= 112, avg=97.60, stdev= 4.19, samples=20 00:35:25.449 lat (usec) : 750=0.41% 00:35:25.449 lat (msec) : 50=99.59% 00:35:25.449 cpu : usr=94.69%, sys=4.81%, ctx=14, majf=0, minf=62 00:35:25.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.449 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.449 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:25.449 filename1: (groupid=0, jobs=1): err= 0: pid=1392461: Tue Jul 16 00:18:59 2024 00:35:25.449 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:35:25.449 slat (nsec): min=7600, max=57899, avg=9093.55, stdev=2893.11 00:35:25.449 clat (usec): min=40822, max=43008, avg=40980.67, stdev=131.66 00:35:25.449 lat (usec): min=40830, max=43038, avg=40989.76, stdev=132.01 00:35:25.449 clat percentiles (usec): 00:35:25.449 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:25.449 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:25.449 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:25.449 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:35:25.449 | 99.99th=[43254] 00:35:25.449 bw ( KiB/s): min= 384, max= 416, per=49.68%, avg=388.80, stdev=11.72, samples=20 00:35:25.449 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:25.449 lat (msec) : 50=100.00% 00:35:25.449 cpu : usr=93.92%, sys=5.58%, ctx=11, majf=0, minf=227 00:35:25.449 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.449 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.449 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:25.449 00:35:25.449 Run status group 0 (all jobs): 00:35:25.449 READ: bw=781KiB/s (800kB/s), 390KiB/s-391KiB/s (399kB/s-401kB/s), io=7824KiB (8012kB), run=10007-10018msec 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- 
# destroy_subsystems 0 1 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.449 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 00:18:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:35:25.450 real 0m11.174s 00:35:25.450 user 0m19.884s 00:35:25.450 sys 0m1.328s 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 ************************************ 00:35:25.450 END TEST fio_dif_1_multi_subsystems 00:35:25.450 ************************************ 00:35:25.450 00:18:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:25.450 00:18:59 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:25.450 00:18:59 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 ************************************ 00:35:25.450 START TEST fio_dif_rand_params 00:35:25.450 ************************************ 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:25.450 00:18:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 bdev_null0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.450 [2024-07-16 00:18:59.554419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:25.450 
00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:25.450 { 00:35:25.450 "params": { 00:35:25.450 "name": "Nvme$subsystem", 00:35:25.450 "trtype": "$TEST_TRANSPORT", 00:35:25.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.450 "adrfam": "ipv4", 00:35:25.450 "trsvcid": "$NVMF_PORT", 00:35:25.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.450 "hdgst": ${hdgst:-false}, 00:35:25.450 "ddgst": ${ddgst:-false} 00:35:25.450 }, 00:35:25.450 "method": "bdev_nvme_attach_controller" 00:35:25.450 } 00:35:25.450 EOF 00:35:25.450 )") 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.450 00:18:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:25.450 "params": { 00:35:25.450 "name": "Nvme0", 00:35:25.450 "trtype": "tcp", 00:35:25.450 "traddr": "10.0.0.2", 00:35:25.450 "adrfam": "ipv4", 00:35:25.450 "trsvcid": "4420", 00:35:25.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.450 "hdgst": false, 00:35:25.450 "ddgst": false 00:35:25.450 }, 00:35:25.450 "method": "bdev_nvme_attach_controller" 00:35:25.450 }' 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.450 00:18:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.450 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:25.450 ... 00:35:25.450 fio-3.35 00:35:25.450 Starting 3 threads 00:35:25.450 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.011 00:35:32.011 filename0: (groupid=0, jobs=1): err= 0: pid=1393468: Tue Jul 16 00:19:05 2024 00:35:32.011 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(119MiB/5045msec) 00:35:32.011 slat (nsec): min=6475, max=35476, avg=13846.74, stdev=3708.97 00:35:32.011 clat (usec): min=4895, max=89805, avg=15849.87, stdev=11861.39 00:35:32.011 lat (usec): min=4903, max=89817, avg=15863.72, stdev=11861.31 00:35:32.011 clat percentiles (usec): 00:35:32.011 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[ 9765], 00:35:32.011 | 30.00th=[10421], 40.00th=[11994], 50.00th=[13698], 60.00th=[14353], 00:35:32.011 | 70.00th=[14746], 80.00th=[15139], 90.00th=[17171], 95.00th=[50594], 00:35:32.011 | 99.00th=[55837], 99.50th=[56886], 99.90th=[89654], 99.95th=[89654], 00:35:32.011 | 99.99th=[89654] 00:35:32.011 bw ( KiB/s): min=16896, max=31488, per=33.21%, avg=24299.30, stdev=4699.81, samples=10 00:35:32.011 iops : min= 132, max= 246, avg=189.80, stdev=36.71, samples=10 00:35:32.011 lat (msec) : 10=23.24%, 20=67.72%, 50=3.36%, 100=5.68% 00:35:32.011 cpu : usr=95.64%, sys=3.97%, ctx=11, majf=0, minf=97 00:35:32.011 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 issued rwts: total=951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.011 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.011 filename0: (groupid=0, jobs=1): err= 0: pid=1393469: Tue Jul 16 00:19:05 2024 00:35:32.011 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(123MiB/5004msec) 00:35:32.011 slat 
(nsec): min=6092, max=39784, avg=18385.87, stdev=5269.68 00:35:32.011 clat (usec): min=5187, max=92974, avg=15290.92, stdev=11775.16 00:35:32.011 lat (usec): min=5201, max=92989, avg=15309.30, stdev=11774.93 00:35:32.011 clat percentiles (usec): 00:35:32.011 | 1.00th=[ 5538], 5.00th=[ 6521], 10.00th=[ 8455], 20.00th=[ 9503], 00:35:32.011 | 30.00th=[10028], 40.00th=[11600], 50.00th=[13173], 60.00th=[13698], 00:35:32.011 | 70.00th=[14222], 80.00th=[14746], 90.00th=[16057], 95.00th=[50594], 00:35:32.011 | 99.00th=[55313], 99.50th=[55837], 99.90th=[92799], 99.95th=[92799], 00:35:32.011 | 99.99th=[92799] 00:35:32.011 bw ( KiB/s): min=17408, max=31744, per=34.19%, avg=25016.70, stdev=5016.97, samples=10 00:35:32.011 iops : min= 136, max= 248, avg=195.40, stdev=39.17, samples=10 00:35:32.011 lat (msec) : 10=30.20%, 20=61.12%, 50=3.16%, 100=5.51% 00:35:32.011 cpu : usr=95.52%, sys=4.02%, ctx=12, majf=0, minf=125 00:35:32.011 IO depths : 1=4.3%, 2=95.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.011 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.011 filename0: (groupid=0, jobs=1): err= 0: pid=1393470: Tue Jul 16 00:19:05 2024 00:35:32.011 read: IOPS=189, BW=23.6MiB/s (24.8MB/s)(119MiB/5042msec) 00:35:32.011 slat (nsec): min=6863, max=63438, avg=13693.23, stdev=4120.12 00:35:32.011 clat (usec): min=4727, max=96286, avg=15849.48, stdev=13218.88 00:35:32.011 lat (usec): min=4739, max=96294, avg=15863.18, stdev=13218.93 00:35:32.011 clat percentiles (usec): 00:35:32.011 | 1.00th=[ 5211], 5.00th=[ 6063], 10.00th=[ 7767], 20.00th=[ 9241], 00:35:32.011 | 30.00th=[ 9896], 40.00th=[11994], 50.00th=[13173], 60.00th=[13698], 00:35:32.011 | 70.00th=[14091], 80.00th=[14484], 90.00th=[46400], 95.00th=[52167], 
00:35:32.011 | 99.00th=[54789], 99.50th=[56361], 99.90th=[95945], 99.95th=[95945], 00:35:32.011 | 99.99th=[95945] 00:35:32.011 bw ( KiB/s): min=18432, max=33280, per=33.24%, avg=24320.00, stdev=4298.97, samples=10 00:35:32.011 iops : min= 144, max= 260, avg=190.00, stdev=33.59, samples=10 00:35:32.011 lat (msec) : 10=32.21%, 20=57.50%, 50=3.46%, 100=6.82% 00:35:32.011 cpu : usr=95.36%, sys=4.23%, ctx=9, majf=0, minf=115 00:35:32.011 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.011 issued rwts: total=953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.011 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:32.011 00:35:32.011 Run status group 0 (all jobs): 00:35:32.011 READ: bw=71.5MiB/s (74.9MB/s), 23.6MiB/s-24.5MiB/s (24.7MB/s-25.7MB/s), io=361MiB (378MB), run=5004-5045msec 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.011 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 bdev_null0 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.012 00:19:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 [2024-07-16 00:19:05.598094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 bdev_null1 00:35:32.012 00:19:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 bdev_null2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.012 { 00:35:32.012 "params": { 00:35:32.012 "name": "Nvme$subsystem", 00:35:32.012 "trtype": "$TEST_TRANSPORT", 00:35:32.012 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:35:32.012 "adrfam": "ipv4", 00:35:32.012 "trsvcid": "$NVMF_PORT", 00:35:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.012 "hdgst": ${hdgst:-false}, 00:35:32.012 "ddgst": ${ddgst:-false} 00:35:32.012 }, 00:35:32.012 "method": "bdev_nvme_attach_controller" 00:35:32.012 } 00:35:32.012 EOF 00:35:32.012 )") 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.012 { 00:35:32.012 "params": { 00:35:32.012 "name": "Nvme$subsystem", 00:35:32.012 "trtype": "$TEST_TRANSPORT", 00:35:32.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.012 "adrfam": "ipv4", 00:35:32.012 "trsvcid": "$NVMF_PORT", 00:35:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.012 "hdgst": ${hdgst:-false}, 00:35:32.012 "ddgst": ${ddgst:-false} 00:35:32.012 }, 00:35:32.012 "method": "bdev_nvme_attach_controller" 00:35:32.012 } 00:35:32.012 EOF 00:35:32.012 )") 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.012 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.012 { 00:35:32.012 "params": { 00:35:32.012 "name": "Nvme$subsystem", 00:35:32.012 "trtype": "$TEST_TRANSPORT", 00:35:32.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.013 "adrfam": "ipv4", 00:35:32.013 "trsvcid": "$NVMF_PORT", 00:35:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.013 "hdgst": ${hdgst:-false}, 00:35:32.013 "ddgst": ${ddgst:-false} 00:35:32.013 }, 00:35:32.013 "method": "bdev_nvme_attach_controller" 00:35:32.013 } 00:35:32.013 EOF 00:35:32.013 )") 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:32.013 "params": { 00:35:32.013 "name": "Nvme0", 00:35:32.013 "trtype": "tcp", 00:35:32.013 "traddr": "10.0.0.2", 00:35:32.013 "adrfam": "ipv4", 00:35:32.013 "trsvcid": "4420", 00:35:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.013 "hdgst": false, 00:35:32.013 "ddgst": false 00:35:32.013 }, 00:35:32.013 "method": "bdev_nvme_attach_controller" 00:35:32.013 },{ 00:35:32.013 "params": { 00:35:32.013 "name": "Nvme1", 00:35:32.013 "trtype": "tcp", 00:35:32.013 "traddr": "10.0.0.2", 00:35:32.013 "adrfam": "ipv4", 00:35:32.013 "trsvcid": "4420", 00:35:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.013 "hdgst": false, 00:35:32.013 "ddgst": false 00:35:32.013 }, 00:35:32.013 "method": "bdev_nvme_attach_controller" 00:35:32.013 },{ 00:35:32.013 "params": { 00:35:32.013 "name": "Nvme2", 00:35:32.013 "trtype": "tcp", 00:35:32.013 "traddr": "10.0.0.2", 00:35:32.013 "adrfam": "ipv4", 00:35:32.013 "trsvcid": "4420", 00:35:32.013 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:32.013 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:32.013 "hdgst": false, 00:35:32.013 "ddgst": false 00:35:32.013 }, 00:35:32.013 "method": "bdev_nvme_attach_controller" 00:35:32.013 }' 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.013 00:19:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.013 00:19:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.013 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.013 ... 00:35:32.013 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.013 ... 00:35:32.013 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:32.013 ... 
00:35:32.013 fio-3.35 00:35:32.013 Starting 24 threads 00:35:32.013 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.218 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394093: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10086msec) 00:35:44.218 slat (usec): min=11, max=153, avg=95.40, stdev=24.76 00:35:44.218 clat (msec): min=264, max=672, avg=419.43, stdev=54.35 00:35:44.218 lat (msec): min=264, max=672, avg=419.53, stdev=54.35 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 305], 5.00th=[ 372], 10.00th=[ 384], 20.00th=[ 384], 00:35:44.218 | 30.00th=[ 393], 40.00th=[ 397], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.218 | 70.00th=[ 422], 80.00th=[ 447], 90.00th=[ 460], 95.00th=[ 535], 00:35:44.218 | 99.00th=[ 592], 99.50th=[ 676], 99.90th=[ 676], 99.95th=[ 676], 00:35:44.218 | 99.99th=[ 676] 00:35:44.218 bw ( KiB/s): min= 127, max= 256, per=3.48%, avg=154.89, stdev=49.79, samples=19 00:35:44.218 iops : min= 31, max= 64, avg=38.68, stdev=12.47, samples=19 00:35:44.218 lat (msec) : 500=90.62%, 750=9.38% 00:35:44.218 cpu : usr=98.16%, sys=1.41%, ctx=23, majf=0, minf=53 00:35:44.218 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394094: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10134msec) 00:35:44.218 slat (usec): min=8, max=127, avg=87.21, stdev=20.53 00:35:44.218 clat (msec): min=261, max=599, avg=403.92, stdev=61.15 00:35:44.218 lat (msec): min=261, max=599, avg=404.00, stdev=61.16 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 262], 5.00th=[ 266], 
10.00th=[ 359], 20.00th=[ 372], 00:35:44.218 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 397], 60.00th=[ 414], 00:35:44.218 | 70.00th=[ 418], 80.00th=[ 443], 90.00th=[ 464], 95.00th=[ 518], 00:35:44.218 | 99.00th=[ 550], 99.50th=[ 558], 99.90th=[ 600], 99.95th=[ 600], 00:35:44.218 | 99.99th=[ 600] 00:35:44.218 bw ( KiB/s): min= 127, max= 256, per=3.46%, avg=153.55, stdev=46.86, samples=20 00:35:44.218 iops : min= 31, max= 64, avg=38.35, stdev=11.74, samples=20 00:35:44.218 lat (msec) : 500=91.00%, 750=9.00% 00:35:44.218 cpu : usr=98.56%, sys=1.04%, ctx=18, majf=0, minf=55 00:35:44.218 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394095: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10111msec) 00:35:44.218 slat (usec): min=8, max=128, avg=35.56, stdev=33.42 00:35:44.218 clat (msec): min=239, max=618, avg=323.06, stdev=79.10 00:35:44.218 lat (msec): min=239, max=618, avg=323.09, stdev=79.12 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 268], 00:35:44.218 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 284], 00:35:44.218 | 70.00th=[ 380], 80.00th=[ 405], 90.00th=[ 447], 95.00th=[ 477], 00:35:44.218 | 99.00th=[ 502], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:35:44.218 | 99.99th=[ 617] 00:35:44.218 bw ( KiB/s): min= 112, max= 256, per=4.34%, avg=192.00, stdev=59.87, samples=20 00:35:44.218 iops : min= 28, max= 64, avg=48.00, stdev=14.97, samples=20 00:35:44.218 lat (msec) : 250=8.87%, 500=89.52%, 750=1.61% 00:35:44.218 cpu : usr=98.43%, sys=1.08%, ctx=53, 
majf=0, minf=64 00:35:44.218 IO depths : 1=1.4%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394096: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10127msec) 00:35:44.218 slat (usec): min=8, max=148, avg=65.21, stdev=33.45 00:35:44.218 clat (msec): min=42, max=506, avg=294.77, stdev=78.96 00:35:44.218 lat (msec): min=42, max=507, avg=294.83, stdev=78.97 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 43], 5.00th=[ 88], 10.00th=[ 255], 20.00th=[ 266], 00:35:44.218 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 292], 00:35:44.218 | 70.00th=[ 305], 80.00th=[ 380], 90.00th=[ 409], 95.00th=[ 418], 00:35:44.218 | 99.00th=[ 418], 99.50th=[ 472], 99.90th=[ 506], 99.95th=[ 506], 00:35:44.218 | 99.99th=[ 506] 00:35:44.218 bw ( KiB/s): min= 128, max= 384, per=4.77%, avg=211.20, stdev=71.29, samples=20 00:35:44.218 iops : min= 32, max= 96, avg=52.80, stdev=17.82, samples=20 00:35:44.218 lat (msec) : 50=2.94%, 100=2.57%, 250=3.31%, 500=90.81%, 750=0.37% 00:35:44.218 cpu : usr=98.39%, sys=1.20%, ctx=21, majf=0, minf=41 00:35:44.218 IO depths : 1=1.8%, 2=8.1%, 4=25.0%, 8=54.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394097: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=55, 
BW=221KiB/s (226kB/s)(2240KiB/10134msec) 00:35:44.218 slat (usec): min=8, max=147, avg=29.59, stdev=24.04 00:35:44.218 clat (msec): min=158, max=575, avg=288.80, stdev=53.20 00:35:44.218 lat (msec): min=158, max=575, avg=288.83, stdev=53.21 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 262], 00:35:44.218 | 30.00th=[ 268], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 279], 00:35:44.218 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 359], 95.00th=[ 388], 00:35:44.218 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 575], 99.95th=[ 575], 00:35:44.218 | 99.99th=[ 575] 00:35:44.218 bw ( KiB/s): min= 111, max= 256, per=4.91%, avg=217.55, stdev=57.29, samples=20 00:35:44.218 iops : min= 27, max= 64, avg=54.35, stdev=14.40, samples=20 00:35:44.218 lat (msec) : 250=10.71%, 500=86.07%, 750=3.21% 00:35:44.218 cpu : usr=98.66%, sys=0.97%, ctx=20, majf=0, minf=61 00:35:44.218 IO depths : 1=0.7%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394098: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10085msec) 00:35:44.218 slat (usec): min=5, max=136, avg=91.63, stdev=32.55 00:35:44.218 clat (msec): min=293, max=671, avg=419.39, stdev=66.39 00:35:44.218 lat (msec): min=293, max=671, avg=419.48, stdev=66.37 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 296], 5.00th=[ 372], 10.00th=[ 372], 20.00th=[ 384], 00:35:44.218 | 30.00th=[ 393], 40.00th=[ 397], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.218 | 70.00th=[ 422], 80.00th=[ 447], 90.00th=[ 451], 95.00th=[ 518], 00:35:44.218 | 99.00th=[ 676], 99.50th=[ 676], 
99.90th=[ 676], 99.95th=[ 676], 00:35:44.218 | 99.99th=[ 676] 00:35:44.218 bw ( KiB/s): min= 128, max= 256, per=3.48%, avg=154.95, stdev=51.72, samples=19 00:35:44.218 iops : min= 32, max= 64, avg=38.74, stdev=12.93, samples=19 00:35:44.218 lat (msec) : 500=91.67%, 750=8.33% 00:35:44.218 cpu : usr=98.09%, sys=1.26%, ctx=107, majf=0, minf=48 00:35:44.218 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:44.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.218 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.218 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.218 filename0: (groupid=0, jobs=1): err= 0: pid=1394099: Tue Jul 16 00:19:16 2024 00:35:44.218 read: IOPS=41, BW=165KiB/s (169kB/s)(1664KiB/10107msec) 00:35:44.218 slat (usec): min=6, max=146, avg=22.00, stdev=24.23 00:35:44.218 clat (msec): min=121, max=590, avg=388.50, stdev=97.47 00:35:44.218 lat (msec): min=121, max=591, avg=388.52, stdev=97.47 00:35:44.218 clat percentiles (msec): 00:35:44.218 | 1.00th=[ 123], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 284], 00:35:44.218 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 409], 00:35:44.218 | 70.00th=[ 422], 80.00th=[ 439], 90.00th=[ 502], 95.00th=[ 567], 00:35:44.218 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:35:44.218 | 99.99th=[ 592] 00:35:44.218 bw ( KiB/s): min= 112, max= 256, per=3.80%, avg=168.42, stdev=58.03, samples=19 00:35:44.218 iops : min= 28, max= 64, avg=42.11, stdev=14.51, samples=19 00:35:44.218 lat (msec) : 250=4.81%, 500=84.13%, 750=11.06% 00:35:44.218 cpu : usr=98.32%, sys=1.06%, ctx=55, majf=0, minf=43 00:35:44.219 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.3%, 
8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename0: (groupid=0, jobs=1): err= 0: pid=1394100: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10152msec) 00:35:44.219 slat (usec): min=4, max=215, avg=35.77, stdev=25.56 00:35:44.219 clat (msec): min=18, max=316, avg=259.58, stdev=54.84 00:35:44.219 lat (msec): min=18, max=316, avg=259.62, stdev=54.85 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 19], 5.00th=[ 108], 10.00th=[ 220], 20.00th=[ 253], 00:35:44.219 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:35:44.219 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 309], 00:35:44.219 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:35:44.219 | 99.99th=[ 317] 00:35:44.219 bw ( KiB/s): min= 128, max= 509, per=5.49%, avg=243.05, stdev=77.95, samples=20 00:35:44.219 iops : min= 32, max= 127, avg=60.75, stdev=19.44, samples=20 00:35:44.219 lat (msec) : 20=2.24%, 50=0.32%, 100=0.32%, 250=16.67%, 500=80.45% 00:35:44.219 cpu : usr=97.84%, sys=1.49%, ctx=100, majf=0, minf=78 00:35:44.219 IO depths : 1=0.5%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394101: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=54, BW=219KiB/s (225kB/s)(2224KiB/10134msec) 00:35:44.219 slat (usec): min=14, max=139, avg=67.37, stdev=31.82 00:35:44.219 clat (msec): min=150, max=501, avg=290.27, stdev=54.78 00:35:44.219 lat (msec): min=151, max=501, avg=290.33, 
stdev=54.78 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 153], 5.00th=[ 241], 10.00th=[ 253], 20.00th=[ 262], 00:35:44.219 | 30.00th=[ 266], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:35:44.219 | 70.00th=[ 284], 80.00th=[ 313], 90.00th=[ 384], 95.00th=[ 388], 00:35:44.219 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:35:44.219 | 99.99th=[ 502] 00:35:44.219 bw ( KiB/s): min= 127, max= 256, per=4.86%, avg=215.95, stdev=54.16, samples=20 00:35:44.219 iops : min= 31, max= 64, avg=53.95, stdev=13.61, samples=20 00:35:44.219 lat (msec) : 250=7.19%, 500=89.93%, 750=2.88% 00:35:44.219 cpu : usr=98.36%, sys=1.22%, ctx=36, majf=0, minf=57 00:35:44.219 IO depths : 1=1.3%, 2=4.3%, 4=15.3%, 8=67.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=91.3%, 8=3.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394102: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=41, BW=165KiB/s (168kB/s)(1664KiB/10114msec) 00:35:44.219 slat (usec): min=8, max=136, avg=83.47, stdev=28.49 00:35:44.219 clat (msec): min=240, max=579, avg=384.95, stdev=59.54 00:35:44.219 lat (msec): min=240, max=579, avg=385.03, stdev=59.56 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 268], 5.00th=[ 279], 10.00th=[ 296], 20.00th=[ 317], 00:35:44.219 | 30.00th=[ 384], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 409], 00:35:44.219 | 70.00th=[ 418], 80.00th=[ 422], 90.00th=[ 443], 95.00th=[ 451], 00:35:44.219 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:35:44.219 | 99.99th=[ 584] 00:35:44.219 bw ( KiB/s): min= 112, max= 256, per=3.59%, avg=159.95, stdev=53.62, samples=20 00:35:44.219 iops : min= 28, max= 64, avg=39.95, stdev=13.35, samples=20 
00:35:44.219 lat (msec) : 250=0.48%, 500=97.12%, 750=2.40% 00:35:44.219 cpu : usr=98.33%, sys=1.26%, ctx=36, majf=0, minf=52 00:35:44.219 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394103: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10132msec) 00:35:44.219 slat (nsec): min=8812, max=60551, avg=15292.41, stdev=6793.79 00:35:44.219 clat (msec): min=183, max=476, avg=280.80, stdev=39.17 00:35:44.219 lat (msec): min=183, max=476, avg=280.82, stdev=39.17 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 222], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 264], 00:35:44.219 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 279], 60.00th=[ 279], 00:35:44.219 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 309], 95.00th=[ 317], 00:35:44.219 | 99.00th=[ 477], 99.50th=[ 477], 99.90th=[ 477], 99.95th=[ 477], 00:35:44.219 | 99.99th=[ 477] 00:35:44.219 bw ( KiB/s): min= 128, max= 256, per=5.04%, avg=224.00, stdev=51.65, samples=20 00:35:44.219 iops : min= 32, max= 64, avg=56.00, stdev=12.91, samples=20 00:35:44.219 lat (msec) : 250=10.42%, 500=89.58% 00:35:44.219 cpu : usr=98.68%, sys=0.95%, ctx=23, majf=0, minf=56 00:35:44.219 IO depths : 1=0.9%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, 
jobs=1): err= 0: pid=1394104: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=55, BW=221KiB/s (226kB/s)(2232KiB/10106msec) 00:35:44.219 slat (usec): min=8, max=108, avg=17.21, stdev=13.87 00:35:44.219 clat (msec): min=119, max=672, avg=289.40, stdev=68.88 00:35:44.219 lat (msec): min=119, max=672, avg=289.41, stdev=68.88 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 121], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 266], 00:35:44.219 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 284], 00:35:44.219 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 317], 95.00th=[ 430], 00:35:44.219 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 676], 99.95th=[ 676], 00:35:44.219 | 99.99th=[ 676] 00:35:44.219 bw ( KiB/s): min= 128, max= 256, per=5.15%, avg=228.16, stdev=47.34, samples=19 00:35:44.219 iops : min= 32, max= 64, avg=57.00, stdev=11.83, samples=19 00:35:44.219 lat (msec) : 250=10.39%, 500=86.74%, 750=2.87% 00:35:44.219 cpu : usr=98.38%, sys=1.24%, ctx=27, majf=0, minf=49 00:35:44.219 IO depths : 1=1.6%, 2=7.9%, 4=25.1%, 8=54.7%, 16=10.8%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394105: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=37, BW=152KiB/s (156kB/s)(1536KiB/10106msec) 00:35:44.219 slat (nsec): min=6815, max=79627, avg=32127.40, stdev=11236.54 00:35:44.219 clat (msec): min=260, max=828, avg=420.70, stdev=76.64 00:35:44.219 lat (msec): min=260, max=828, avg=420.73, stdev=76.64 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 262], 5.00th=[ 275], 10.00th=[ 376], 20.00th=[ 384], 00:35:44.219 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.219 | 70.00th=[ 422], 
80.00th=[ 451], 90.00th=[ 506], 95.00th=[ 523], 00:35:44.219 | 99.00th=[ 667], 99.50th=[ 827], 99.90th=[ 827], 99.95th=[ 827], 00:35:44.219 | 99.99th=[ 827] 00:35:44.219 bw ( KiB/s): min= 128, max= 256, per=3.48%, avg=154.95, stdev=51.72, samples=19 00:35:44.219 iops : min= 32, max= 64, avg=38.74, stdev=12.93, samples=19 00:35:44.219 lat (msec) : 500=86.98%, 750=12.50%, 1000=0.52% 00:35:44.219 cpu : usr=97.54%, sys=1.62%, ctx=102, majf=0, minf=33 00:35:44.219 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394106: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=52, BW=211KiB/s (216kB/s)(2112KiB/10027msec) 00:35:44.219 slat (usec): min=8, max=133, avg=52.80, stdev=39.17 00:35:44.219 clat (msec): min=41, max=523, avg=303.41, stdev=94.09 00:35:44.219 lat (msec): min=41, max=523, avg=303.46, stdev=94.11 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 42], 5.00th=[ 87], 10.00th=[ 220], 20.00th=[ 264], 00:35:44.219 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 309], 00:35:44.219 | 70.00th=[ 380], 80.00th=[ 401], 90.00th=[ 418], 95.00th=[ 426], 00:35:44.219 | 99.00th=[ 430], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 523], 00:35:44.219 | 99.99th=[ 523] 00:35:44.219 bw ( KiB/s): min= 128, max= 513, per=4.61%, avg=204.85, stdev=92.69, samples=20 00:35:44.219 iops : min= 32, max= 128, avg=51.20, stdev=23.13, samples=20 00:35:44.219 lat (msec) : 50=3.03%, 100=3.03%, 250=9.09%, 500=84.09%, 750=0.76% 00:35:44.219 cpu : usr=98.41%, sys=1.20%, ctx=32, majf=0, minf=46 00:35:44.219 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 
00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.219 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.219 filename1: (groupid=0, jobs=1): err= 0: pid=1394107: Tue Jul 16 00:19:16 2024 00:35:44.219 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10144msec) 00:35:44.219 slat (usec): min=9, max=151, avg=66.08, stdev=33.89 00:35:44.219 clat (msec): min=38, max=505, avg=288.86, stdev=83.15 00:35:44.219 lat (msec): min=38, max=505, avg=288.92, stdev=83.17 00:35:44.219 clat percentiles (msec): 00:35:44.219 | 1.00th=[ 40], 5.00th=[ 89], 10.00th=[ 239], 20.00th=[ 257], 00:35:44.219 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 284], 00:35:44.219 | 70.00th=[ 288], 80.00th=[ 317], 90.00th=[ 422], 95.00th=[ 447], 00:35:44.219 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 506], 99.95th=[ 506], 00:35:44.219 | 99.99th=[ 506] 00:35:44.219 bw ( KiB/s): min= 128, max= 384, per=4.91%, avg=217.60, stdev=71.82, samples=20 00:35:44.219 iops : min= 32, max= 96, avg=54.40, stdev=17.95, samples=20 00:35:44.219 lat (msec) : 50=2.50%, 100=3.21%, 250=10.71%, 500=83.21%, 750=0.36% 00:35:44.219 cpu : usr=98.28%, sys=1.12%, ctx=64, majf=0, minf=71 00:35:44.219 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:35:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.219 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename1: (groupid=0, jobs=1): err= 0: pid=1394108: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10086msec) 00:35:44.220 slat (usec): min=8, max=153, avg=82.67, stdev=40.55 
00:35:44.220 clat (msec): min=241, max=593, avg=419.53, stdev=52.54 00:35:44.220 lat (msec): min=241, max=593, avg=419.61, stdev=52.53 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 309], 5.00th=[ 376], 10.00th=[ 384], 20.00th=[ 384], 00:35:44.220 | 30.00th=[ 393], 40.00th=[ 397], 50.00th=[ 409], 60.00th=[ 414], 00:35:44.220 | 70.00th=[ 422], 80.00th=[ 451], 90.00th=[ 460], 95.00th=[ 550], 00:35:44.220 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:35:44.220 | 99.99th=[ 592] 00:35:44.220 bw ( KiB/s): min= 127, max= 256, per=3.48%, avg=154.89, stdev=53.64, samples=19 00:35:44.220 iops : min= 31, max= 64, avg=38.68, stdev=13.43, samples=19 00:35:44.220 lat (msec) : 250=0.52%, 500=90.10%, 750=9.38% 00:35:44.220 cpu : usr=98.28%, sys=1.12%, ctx=72, majf=0, minf=56 00:35:44.220 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394109: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=55, BW=221KiB/s (227kB/s)(2240KiB/10126msec) 00:35:44.220 slat (usec): min=8, max=189, avg=33.30, stdev=39.06 00:35:44.220 clat (msec): min=40, max=461, avg=289.00, stdev=81.44 00:35:44.220 lat (msec): min=41, max=461, avg=289.03, stdev=81.46 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 42], 5.00th=[ 89], 10.00th=[ 239], 20.00th=[ 253], 00:35:44.220 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 284], 00:35:44.220 | 70.00th=[ 288], 80.00th=[ 317], 90.00th=[ 409], 95.00th=[ 447], 00:35:44.220 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 464], 99.95th=[ 464], 00:35:44.220 | 99.99th=[ 464] 00:35:44.220 bw ( KiB/s): min= 128, max= 383, 
per=4.91%, avg=217.55, stdev=73.00, samples=20 00:35:44.220 iops : min= 32, max= 95, avg=54.35, stdev=18.16, samples=20 00:35:44.220 lat (msec) : 50=2.50%, 100=2.50%, 250=12.14%, 500=82.86% 00:35:44.220 cpu : usr=98.58%, sys=0.85%, ctx=61, majf=0, minf=56 00:35:44.220 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394110: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=50, BW=203KiB/s (208kB/s)(2056KiB/10113msec) 00:35:44.220 slat (usec): min=8, max=128, avg=25.70, stdev=27.09 00:35:44.220 clat (msec): min=176, max=679, avg=314.35, stdev=82.14 00:35:44.220 lat (msec): min=176, max=679, avg=314.38, stdev=82.15 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 176], 5.00th=[ 253], 10.00th=[ 259], 20.00th=[ 266], 00:35:44.220 | 30.00th=[ 271], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 292], 00:35:44.220 | 70.00th=[ 309], 80.00th=[ 384], 90.00th=[ 430], 95.00th=[ 447], 00:35:44.220 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 676], 99.95th=[ 676], 00:35:44.220 | 99.99th=[ 676] 00:35:44.220 bw ( KiB/s): min= 112, max= 256, per=4.72%, avg=209.68, stdev=56.42, samples=19 00:35:44.220 iops : min= 28, max= 64, avg=52.42, stdev=14.10, samples=19 00:35:44.220 lat (msec) : 250=4.67%, 500=91.83%, 750=3.50% 00:35:44.220 cpu : usr=98.43%, sys=1.15%, ctx=22, majf=0, minf=50 00:35:44.220 IO depths : 1=1.6%, 2=4.7%, 4=15.4%, 8=67.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=514,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394111: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10132msec) 00:35:44.220 slat (usec): min=10, max=150, avg=71.85, stdev=36.60 00:35:44.220 clat (msec): min=259, max=578, avg=361.28, stdev=67.34 00:35:44.220 lat (msec): min=259, max=578, avg=361.35, stdev=67.36 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 259], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 279], 00:35:44.220 | 30.00th=[ 296], 40.00th=[ 317], 50.00th=[ 384], 60.00th=[ 401], 00:35:44.220 | 70.00th=[ 409], 80.00th=[ 418], 90.00th=[ 430], 95.00th=[ 439], 00:35:44.220 | 99.00th=[ 481], 99.50th=[ 510], 99.90th=[ 575], 99.95th=[ 575], 00:35:44.220 | 99.99th=[ 575] 00:35:44.220 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=172.80, stdev=59.55, samples=20 00:35:44.220 iops : min= 32, max= 64, avg=43.20, stdev=14.89, samples=20 00:35:44.220 lat (msec) : 500=99.11%, 750=0.89% 00:35:44.220 cpu : usr=98.31%, sys=1.23%, ctx=24, majf=0, minf=42 00:35:44.220 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394112: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10104msec) 00:35:44.220 slat (usec): min=5, max=157, avg=86.18, stdev=29.08 00:35:44.220 clat (msec): min=119, max=828, avg=420.33, stdev=87.43 00:35:44.220 lat (msec): min=119, max=828, avg=420.41, stdev=87.42 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 259], 5.00th=[ 279], 10.00th=[ 363], 20.00th=[ 
380], 00:35:44.220 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.220 | 70.00th=[ 430], 80.00th=[ 451], 90.00th=[ 506], 95.00th=[ 584], 00:35:44.220 | 99.00th=[ 667], 99.50th=[ 827], 99.90th=[ 827], 99.95th=[ 827], 00:35:44.220 | 99.99th=[ 827] 00:35:44.220 bw ( KiB/s): min= 112, max= 256, per=3.48%, avg=154.95, stdev=52.27, samples=19 00:35:44.220 iops : min= 28, max= 64, avg=38.74, stdev=13.07, samples=19 00:35:44.220 lat (msec) : 250=0.52%, 500=82.29%, 750=16.67%, 1000=0.52% 00:35:44.220 cpu : usr=98.33%, sys=1.06%, ctx=84, majf=0, minf=28 00:35:44.220 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394113: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10104msec) 00:35:44.220 slat (usec): min=7, max=131, avg=35.13, stdev=17.47 00:35:44.220 clat (msec): min=119, max=827, avg=420.67, stdev=81.41 00:35:44.220 lat (msec): min=119, max=827, avg=420.71, stdev=81.40 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 262], 5.00th=[ 271], 10.00th=[ 363], 20.00th=[ 384], 00:35:44.220 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 418], 60.00th=[ 418], 00:35:44.220 | 70.00th=[ 430], 80.00th=[ 451], 90.00th=[ 506], 95.00th=[ 542], 00:35:44.220 | 99.00th=[ 667], 99.50th=[ 827], 99.90th=[ 827], 99.95th=[ 827], 00:35:44.220 | 99.99th=[ 827] 00:35:44.220 bw ( KiB/s): min= 128, max= 256, per=3.48%, avg=154.95, stdev=51.72, samples=19 00:35:44.220 iops : min= 32, max= 64, avg=38.74, stdev=12.93, samples=19 00:35:44.220 lat (msec) : 250=0.52%, 500=84.90%, 750=14.06%, 1000=0.52% 00:35:44.220 cpu : usr=98.40%, 
sys=1.11%, ctx=18, majf=0, minf=53 00:35:44.220 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.220 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.220 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.220 filename2: (groupid=0, jobs=1): err= 0: pid=1394114: Tue Jul 16 00:19:16 2024 00:35:44.220 read: IOPS=38, BW=153KiB/s (157kB/s)(1536KiB/10012msec) 00:35:44.220 slat (usec): min=21, max=138, avg=92.27, stdev=18.28 00:35:44.220 clat (msec): min=374, max=501, avg=416.33, stdev=33.57 00:35:44.220 lat (msec): min=374, max=501, avg=416.42, stdev=33.57 00:35:44.220 clat percentiles (msec): 00:35:44.220 | 1.00th=[ 376], 5.00th=[ 380], 10.00th=[ 384], 20.00th=[ 388], 00:35:44.220 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.220 | 70.00th=[ 422], 80.00th=[ 439], 90.00th=[ 477], 95.00th=[ 493], 00:35:44.220 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:35:44.220 | 99.99th=[ 502] 00:35:44.220 bw ( KiB/s): min= 128, max= 256, per=3.32%, avg=147.20, stdev=46.89, samples=20 00:35:44.220 iops : min= 32, max= 64, avg=36.80, stdev=11.72, samples=20 00:35:44.220 lat (msec) : 500=96.61%, 750=3.39% 00:35:44.220 cpu : usr=98.46%, sys=1.14%, ctx=17, majf=0, minf=51 00:35:44.220 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.221 filename2: (groupid=0, jobs=1): err= 0: pid=1394115: Tue Jul 16 00:19:16 2024 00:35:44.221 read: IOPS=39, BW=160KiB/s 
(164kB/s)(1600KiB/10015msec) 00:35:44.221 slat (usec): min=6, max=147, avg=84.70, stdev=32.35 00:35:44.221 clat (msec): min=241, max=564, avg=399.87, stdev=58.14 00:35:44.221 lat (msec): min=241, max=565, avg=399.96, stdev=58.15 00:35:44.221 clat percentiles (msec): 00:35:44.221 | 1.00th=[ 257], 5.00th=[ 262], 10.00th=[ 279], 20.00th=[ 384], 00:35:44.221 | 30.00th=[ 388], 40.00th=[ 393], 50.00th=[ 401], 60.00th=[ 409], 00:35:44.221 | 70.00th=[ 418], 80.00th=[ 430], 90.00th=[ 451], 95.00th=[ 518], 00:35:44.221 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:35:44.221 | 99.99th=[ 567] 00:35:44.221 bw ( KiB/s): min= 127, max= 256, per=3.46%, avg=153.55, stdev=48.84, samples=20 00:35:44.221 iops : min= 31, max= 64, avg=38.35, stdev=12.23, samples=20 00:35:44.221 lat (msec) : 250=0.50%, 500=93.50%, 750=6.00% 00:35:44.221 cpu : usr=98.45%, sys=1.03%, ctx=104, majf=0, minf=45 00:35:44.221 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:44.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.221 filename2: (groupid=0, jobs=1): err= 0: pid=1394116: Tue Jul 16 00:19:16 2024 00:35:44.221 read: IOPS=38, BW=152KiB/s (156kB/s)(1536KiB/10105msec) 00:35:44.221 slat (usec): min=9, max=153, avg=38.05, stdev=23.87 00:35:44.221 clat (msec): min=260, max=670, avg=420.69, stdev=76.65 00:35:44.221 lat (msec): min=260, max=670, avg=420.72, stdev=76.64 00:35:44.221 clat percentiles (msec): 00:35:44.221 | 1.00th=[ 262], 5.00th=[ 279], 10.00th=[ 363], 20.00th=[ 384], 00:35:44.221 | 30.00th=[ 393], 40.00th=[ 401], 50.00th=[ 409], 60.00th=[ 418], 00:35:44.221 | 70.00th=[ 430], 80.00th=[ 451], 90.00th=[ 506], 95.00th=[ 542], 00:35:44.221 | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 
667], 99.95th=[ 667], 00:35:44.221 | 99.99th=[ 667] 00:35:44.221 bw ( KiB/s): min= 112, max= 256, per=3.48%, avg=154.95, stdev=50.05, samples=19 00:35:44.221 iops : min= 28, max= 64, avg=38.74, stdev=12.51, samples=19 00:35:44.221 lat (msec) : 500=84.38%, 750=15.62% 00:35:44.221 cpu : usr=98.08%, sys=1.33%, ctx=48, majf=0, minf=49 00:35:44.221 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:44.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.221 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.221 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:44.221 00:35:44.221 Run status group 0 (all jobs): 00:35:44.221 READ: bw=4424KiB/s (4530kB/s), 152KiB/s-246KiB/s (156kB/s-252kB/s), io=43.9MiB (46.0MB), run=10012-10152msec 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 
00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 bdev_null0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 [2024-07-16 00:19:17.010998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 bdev_null1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:44.221 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:44.221 00:19:17 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:44.222 { 00:35:44.222 "params": { 00:35:44.222 "name": "Nvme$subsystem", 00:35:44.222 "trtype": "$TEST_TRANSPORT", 00:35:44.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.222 "adrfam": "ipv4", 00:35:44.222 "trsvcid": "$NVMF_PORT", 00:35:44.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.222 "hdgst": ${hdgst:-false}, 00:35:44.222 "ddgst": ${ddgst:-false} 00:35:44.222 }, 00:35:44.222 "method": "bdev_nvme_attach_controller" 00:35:44.222 } 00:35:44.222 EOF 00:35:44.222 )") 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:44.222 { 00:35:44.222 "params": { 00:35:44.222 "name": "Nvme$subsystem", 00:35:44.222 "trtype": "$TEST_TRANSPORT", 00:35:44.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.222 "adrfam": "ipv4", 00:35:44.222 "trsvcid": "$NVMF_PORT", 00:35:44.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.222 "hdgst": ${hdgst:-false}, 00:35:44.222 "ddgst": ${ddgst:-false} 00:35:44.222 }, 00:35:44.222 "method": "bdev_nvme_attach_controller" 00:35:44.222 } 00:35:44.222 EOF 00:35:44.222 )") 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:44.222 "params": { 00:35:44.222 "name": "Nvme0", 00:35:44.222 "trtype": "tcp", 00:35:44.222 "traddr": "10.0.0.2", 00:35:44.222 "adrfam": "ipv4", 00:35:44.222 "trsvcid": "4420", 00:35:44.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.222 "hdgst": false, 00:35:44.222 "ddgst": false 00:35:44.222 }, 00:35:44.222 "method": "bdev_nvme_attach_controller" 00:35:44.222 },{ 00:35:44.222 "params": { 00:35:44.222 "name": "Nvme1", 00:35:44.222 "trtype": "tcp", 00:35:44.222 "traddr": "10.0.0.2", 00:35:44.222 "adrfam": "ipv4", 00:35:44.222 "trsvcid": "4420", 00:35:44.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:44.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:44.222 "hdgst": false, 00:35:44.222 "ddgst": false 00:35:44.222 }, 00:35:44.222 "method": "bdev_nvme_attach_controller" 00:35:44.222 }' 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:44.222 00:19:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.222 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:44.222 ... 00:35:44.222 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:44.222 ... 00:35:44.222 fio-3.35 00:35:44.222 Starting 4 threads 00:35:44.222 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.482 00:35:49.482 filename0: (groupid=0, jobs=1): err= 0: pid=1395109: Tue Jul 16 00:19:22 2024 00:35:49.482 read: IOPS=1642, BW=12.8MiB/s (13.5MB/s)(64.7MiB/5045msec) 00:35:49.482 slat (nsec): min=6988, max=72886, avg=17932.32, stdev=8516.73 00:35:49.482 clat (usec): min=2214, max=46055, avg=4810.52, stdev=1246.76 00:35:49.482 lat (usec): min=2237, max=46075, avg=4828.46, stdev=1246.74 00:35:49.482 clat percentiles (usec): 00:35:49.482 | 1.00th=[ 3458], 5.00th=[ 4178], 10.00th=[ 4424], 20.00th=[ 4621], 00:35:49.482 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4817], 60.00th=[ 4817], 00:35:49.482 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5276], 00:35:49.482 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 8291], 99.95th=[45876], 00:35:49.482 | 99.99th=[45876] 00:35:49.482 bw ( KiB/s): min=12976, max=13584, per=25.58%, avg=13243.20, stdev=175.12, samples=10 00:35:49.482 iops : min= 1622, max= 1698, avg=1655.40, stdev=21.89, samples=10 
00:35:49.482 lat (msec) : 4=2.32%, 10=97.60%, 50=0.08% 00:35:49.482 cpu : usr=95.32%, sys=3.97%, ctx=85, majf=0, minf=0 00:35:49.482 IO depths : 1=0.6%, 2=11.1%, 4=60.1%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 issued rwts: total=8284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.482 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.482 filename0: (groupid=0, jobs=1): err= 0: pid=1395110: Tue Jul 16 00:19:22 2024 00:35:49.482 read: IOPS=1626, BW=12.7MiB/s (13.3MB/s)(63.6MiB/5002msec) 00:35:49.482 slat (nsec): min=7231, max=72369, avg=18051.49, stdev=10019.45 00:35:49.482 clat (usec): min=1085, max=9141, avg=4846.85, stdev=682.93 00:35:49.482 lat (usec): min=1097, max=9158, avg=4864.90, stdev=683.25 00:35:49.482 clat percentiles (usec): 00:35:49.482 | 1.00th=[ 2147], 5.00th=[ 4146], 10.00th=[ 4555], 20.00th=[ 4752], 00:35:49.482 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4817], 60.00th=[ 4817], 00:35:49.482 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 5080], 95.00th=[ 5473], 00:35:49.482 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[ 8717], 99.95th=[ 8717], 00:35:49.482 | 99.99th=[ 9110] 00:35:49.482 bw ( KiB/s): min=12720, max=13168, per=25.14%, avg=13014.40, stdev=116.90, samples=10 00:35:49.482 iops : min= 1590, max= 1646, avg=1626.80, stdev=14.61, samples=10 00:35:49.482 lat (msec) : 2=0.92%, 4=1.92%, 10=97.16% 00:35:49.482 cpu : usr=96.02%, sys=3.58%, ctx=7, majf=0, minf=9 00:35:49.482 IO depths : 1=0.8%, 2=21.7%, 4=52.5%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 issued rwts: total=8135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.482 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:35:49.482 filename1: (groupid=0, jobs=1): err= 0: pid=1395111: Tue Jul 16 00:19:22 2024 00:35:49.482 read: IOPS=1639, BW=12.8MiB/s (13.4MB/s)(64.1MiB/5003msec) 00:35:49.482 slat (nsec): min=7095, max=72376, avg=18146.23, stdev=9847.71 00:35:49.482 clat (usec): min=1078, max=8920, avg=4806.59, stdev=414.48 00:35:49.482 lat (usec): min=1090, max=8947, avg=4824.74, stdev=414.99 00:35:49.482 clat percentiles (usec): 00:35:49.482 | 1.00th=[ 3687], 5.00th=[ 4178], 10.00th=[ 4555], 20.00th=[ 4752], 00:35:49.482 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4817], 60.00th=[ 4817], 00:35:49.482 | 70.00th=[ 4883], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5211], 00:35:49.482 | 99.00th=[ 6259], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8160], 00:35:49.482 | 99.99th=[ 8979] 00:35:49.482 bw ( KiB/s): min=12928, max=13296, per=25.33%, avg=13114.60, stdev=123.42, samples=10 00:35:49.482 iops : min= 1616, max= 1662, avg=1639.30, stdev=15.43, samples=10 00:35:49.482 lat (msec) : 2=0.20%, 4=1.67%, 10=98.13% 00:35:49.482 cpu : usr=96.38%, sys=3.22%, ctx=7, majf=0, minf=0 00:35:49.482 IO depths : 1=0.6%, 2=23.3%, 4=51.1%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 issued rwts: total=8203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.482 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.482 filename1: (groupid=0, jobs=1): err= 0: pid=1395112: Tue Jul 16 00:19:22 2024 00:35:49.482 read: IOPS=1604, BW=12.5MiB/s (13.1MB/s)(62.7MiB/5003msec) 00:35:49.482 slat (nsec): min=7168, max=74540, avg=17248.67, stdev=9851.98 00:35:49.482 clat (usec): min=1026, max=9420, avg=4918.47, stdev=804.01 00:35:49.482 lat (usec): min=1039, max=9439, avg=4935.72, stdev=804.01 00:35:49.482 clat percentiles (usec): 00:35:49.482 | 1.00th=[ 1795], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 4752], 00:35:49.482 
| 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4817], 60.00th=[ 4817], 00:35:49.482 | 70.00th=[ 4883], 80.00th=[ 4948], 90.00th=[ 5276], 95.00th=[ 6063], 00:35:49.482 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8717], 99.95th=[ 8979], 00:35:49.482 | 99.99th=[ 9372] 00:35:49.482 bw ( KiB/s): min=12439, max=13136, per=24.78%, avg=12831.00, stdev=219.62, samples=9 00:35:49.482 iops : min= 1554, max= 1642, avg=1603.78, stdev=27.65, samples=9 00:35:49.482 lat (msec) : 2=1.23%, 4=2.01%, 10=96.76% 00:35:49.482 cpu : usr=96.16%, sys=3.40%, ctx=18, majf=0, minf=9 00:35:49.482 IO depths : 1=0.1%, 2=19.5%, 4=53.9%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.482 issued rwts: total=8028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.482 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:49.482 00:35:49.482 Run status group 0 (all jobs): 00:35:49.482 READ: bw=50.6MiB/s (53.0MB/s), 12.5MiB/s-12.8MiB/s (13.1MB/s-13.5MB/s), io=255MiB (267MB), run=5002-5045msec 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:35:49.482 real 0m23.653s 00:35:49.482 user 4m35.794s 00:35:49.482 sys 0m4.987s 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 ************************************ 00:35:49.482 END TEST fio_dif_rand_params 00:35:49.482 
************************************ 00:35:49.482 00:19:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:49.482 00:19:23 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:49.482 00:19:23 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 ************************************ 00:35:49.482 START TEST fio_dif_digest 00:35:49.482 ************************************ 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 bdev_null0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:49.482 [2024-07-16 00:19:23.230076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 
-- # create_json_sub_conf 0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:49.482 { 00:35:49.482 "params": { 00:35:49.482 "name": "Nvme$subsystem", 00:35:49.482 "trtype": "$TEST_TRANSPORT", 00:35:49.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.482 "adrfam": "ipv4", 00:35:49.482 "trsvcid": "$NVMF_PORT", 00:35:49.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.482 "hdgst": ${hdgst:-false}, 00:35:49.482 "ddgst": ${ddgst:-false} 00:35:49.482 }, 00:35:49.482 "method": "bdev_nvme_attach_controller" 00:35:49.482 } 00:35:49.482 EOF 00:35:49.482 )") 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.482 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local 
sanitizers 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:49.483 "params": { 00:35:49.483 "name": "Nvme0", 00:35:49.483 "trtype": "tcp", 00:35:49.483 "traddr": "10.0.0.2", 00:35:49.483 "adrfam": "ipv4", 00:35:49.483 "trsvcid": "4420", 00:35:49.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.483 "hdgst": true, 00:35:49.483 "ddgst": true 00:35:49.483 }, 00:35:49.483 "method": "bdev_nvme_attach_controller" 00:35:49.483 }' 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.483 00:19:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.483 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:49.483 ... 
00:35:49.483 fio-3.35 00:35:49.483 Starting 3 threads 00:35:49.483 EAL: No free 2048 kB hugepages reported on node 1 00:36:01.666 00:36:01.666 filename0: (groupid=0, jobs=1): err= 0: pid=1395658: Tue Jul 16 00:19:33 2024 00:36:01.666 read: IOPS=176, BW=22.0MiB/s (23.1MB/s)(221MiB/10046msec) 00:36:01.666 slat (nsec): min=7960, max=54359, avg=15090.08, stdev=3624.21 00:36:01.666 clat (usec): min=12998, max=55897, avg=16985.16, stdev=2155.52 00:36:01.666 lat (usec): min=13016, max=55917, avg=17000.25, stdev=2155.82 00:36:01.666 clat percentiles (usec): 00:36:01.666 | 1.00th=[14353], 5.00th=[15270], 10.00th=[15533], 20.00th=[16057], 00:36:01.666 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:36:01.666 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[18744], 00:36:01.666 | 99.00th=[19792], 99.50th=[20317], 99.90th=[51643], 99.95th=[55837], 00:36:01.666 | 99.99th=[55837] 00:36:01.666 bw ( KiB/s): min=20992, max=23296, per=32.06%, avg=22617.60, stdev=493.30, samples=20 00:36:01.666 iops : min= 164, max= 182, avg=176.70, stdev= 3.85, samples=20 00:36:01.666 lat (msec) : 20=99.27%, 50=0.45%, 100=0.28% 00:36:01.666 cpu : usr=95.33%, sys=4.27%, ctx=25, majf=0, minf=126 00:36:01.666 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 issued rwts: total=1770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.666 filename0: (groupid=0, jobs=1): err= 0: pid=1395659: Tue Jul 16 00:19:33 2024 00:36:01.666 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(249MiB/10009msec) 00:36:01.666 slat (nsec): min=10520, max=84125, avg=21437.69, stdev=5349.72 00:36:01.666 clat (usec): min=9509, max=60366, avg=15038.84, stdev=1988.46 00:36:01.666 lat (usec): min=9530, max=60410, avg=15060.28, 
stdev=1988.39 00:36:01.666 clat percentiles (usec): 00:36:01.666 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:36:01.666 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:36:01.666 | 70.00th=[15533], 80.00th=[15664], 90.00th=[16057], 95.00th=[16450], 00:36:01.666 | 99.00th=[16909], 99.50th=[17433], 99.90th=[60556], 99.95th=[60556], 00:36:01.666 | 99.99th=[60556] 00:36:01.666 bw ( KiB/s): min=23296, max=26368, per=36.10%, avg=25472.00, stdev=651.35, samples=20 00:36:01.666 iops : min= 182, max= 206, avg=199.00, stdev= 5.09, samples=20 00:36:01.666 lat (msec) : 10=0.05%, 20=99.80%, 100=0.15% 00:36:01.666 cpu : usr=95.49%, sys=4.06%, ctx=16, majf=0, minf=188 00:36:01.666 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.666 filename0: (groupid=0, jobs=1): err= 0: pid=1395660: Tue Jul 16 00:19:33 2024 00:36:01.666 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(222MiB/10047msec) 00:36:01.666 slat (nsec): min=7941, max=36355, avg=14609.15, stdev=3553.39 00:36:01.666 clat (usec): min=4672, max=53595, avg=16939.06, stdev=1890.29 00:36:01.666 lat (usec): min=4696, max=53608, avg=16953.67, stdev=1890.05 00:36:01.666 clat percentiles (usec): 00:36:01.666 | 1.00th=[12256], 5.00th=[15139], 10.00th=[15533], 20.00th=[16057], 00:36:01.666 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 00:36:01.666 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:36:01.666 | 99.00th=[19792], 99.50th=[20055], 99.90th=[51119], 99.95th=[53740], 00:36:01.666 | 99.99th=[53740] 00:36:01.666 bw ( KiB/s): min=21760, max=24576, per=32.15%, avg=22681.60, stdev=612.60, samples=20 
00:36:01.666 iops : min= 170, max= 192, avg=177.20, stdev= 4.79, samples=20 00:36:01.666 lat (msec) : 10=0.79%, 20=98.59%, 50=0.51%, 100=0.11% 00:36:01.666 cpu : usr=95.22%, sys=4.37%, ctx=24, majf=0, minf=139 00:36:01.666 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:01.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.666 issued rwts: total=1775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:01.666 00:36:01.666 Run status group 0 (all jobs): 00:36:01.666 READ: bw=68.9MiB/s (72.2MB/s), 22.0MiB/s-24.9MiB/s (23.1MB/s-26.1MB/s), io=692MiB (726MB), run=10009-10047msec 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.666 00:36:01.666 real 0m10.941s 00:36:01.666 user 0m29.496s 00:36:01.666 sys 0m1.518s 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:01.666 00:19:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:01.666 ************************************ 00:36:01.666 END TEST fio_dif_digest 00:36:01.666 ************************************ 00:36:01.666 00:19:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:01.667 00:19:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:01.667 rmmod nvme_tcp 00:36:01.667 rmmod nvme_fabrics 00:36:01.667 rmmod nvme_keyring 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1391273 ']' 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1391273 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1391273 ']' 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1391273 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1391273 00:36:01.667 00:19:34 nvmf_dif -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1391273' 00:36:01.667 killing process with pid 1391273 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1391273 00:36:01.667 00:19:34 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1391273 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:01.667 00:19:34 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:01.667 Waiting for block devices as requested 00:36:01.667 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:36:01.667 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:01.667 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:01.667 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:01.667 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:01.667 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:01.667 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:01.927 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:01.927 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:01.927 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:01.927 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:01.927 00:19:36 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:01.927 00:19:36 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:01.927 00:19:36 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:01.927 
00:19:36 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:01.927 00:19:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.927 00:19:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:01.927 00:19:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.487 00:19:38 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:04.487 00:36:04.487 real 1m4.323s 00:36:04.487 user 6m30.440s 00:36:04.487 sys 0m14.799s 00:36:04.487 00:19:38 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:04.487 00:19:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.487 ************************************ 00:36:04.487 END TEST nvmf_dif 00:36:04.487 ************************************ 00:36:04.487 00:19:38 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:04.487 00:19:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:04.487 00:19:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:04.487 00:19:38 -- common/autotest_common.sh@10 -- # set +x 00:36:04.487 ************************************ 00:36:04.487 START TEST nvmf_abort_qd_sizes 00:36:04.487 ************************************ 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:04.487 * Looking for test storage... 
00:36:04.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.487 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:04.488 00:19:38 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:04.488 00:19:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.865 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:36:05.866 Found 0000:08:00.0 (0x8086 - 0x159b) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:36:05.866 Found 0000:08:00.1 (0x8086 - 0x159b) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:36:05.866 Found net devices under 0000:08:00.0: cvl_0_0 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:36:05.866 Found net devices under 0000:08:00.1: cvl_0_1 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:05.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:05.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:36:05.866 00:36:05.866 --- 10.0.0.2 ping statistics --- 00:36:05.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.866 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:36:05.866 00:36:05.866 --- 10.0.0.1 ping statistics --- 00:36:05.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.866 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:05.866 00:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:06.800 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:06.800 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:06.800 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:36:06.800 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:07.735 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:07.735 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1399298 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1399298 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1399298 ']' 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:07.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:07.992 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:07.992 [2024-07-16 00:19:42.319819] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:07.992 [2024-07-16 00:19:42.319911] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.992 EAL: No free 2048 kB hugepages reported on node 1 00:36:07.992 [2024-07-16 00:19:42.384624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:07.992 [2024-07-16 00:19:42.473557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.992 [2024-07-16 00:19:42.473613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:07.992 [2024-07-16 00:19:42.473630] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.992 [2024-07-16 00:19:42.473644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.992 [2024-07-16 00:19:42.473656] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:07.992 [2024-07-16 00:19:42.473742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:07.992 [2024-07-16 00:19:42.473773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:07.992 [2024-07-16 00:19:42.473818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:07.992 [2024-07-16 00:19:42.473821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:08.249 00:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:08.249 ************************************ 00:36:08.249 START TEST spdk_target_abort 00:36:08.249 ************************************ 00:36:08.249 00:19:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:08.249 00:19:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:08.249 00:19:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:36:08.249 00:19:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.249 00:19:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.523 spdk_targetn1 00:36:11.523 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.523 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:11.523 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.523 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.523 [2024-07-16 00:19:45.443801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.524 [2024-07-16 00:19:45.476046] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.524 00:19:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.524 EAL: No free 2048 kB hugepages reported on node 1 00:36:14.795 Initializing NVMe Controllers 00:36:14.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.795 Initialization complete. Launching workers. 
00:36:14.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10311, failed: 0 00:36:14.796 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1183, failed to submit 9128 00:36:14.796 success 690, unsuccess 493, failed 0 00:36:14.796 00:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:14.796 00:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.796 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.069 Initializing NVMe Controllers 00:36:18.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:18.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:18.069 Initialization complete. Launching workers. 
00:36:18.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8564, failed: 0 00:36:18.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7335 00:36:18.069 success 342, unsuccess 887, failed 0 00:36:18.069 00:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.069 00:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.069 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.592 Initializing NVMe Controllers 00:36:20.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:20.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:20.592 Initialization complete. Launching workers. 
00:36:20.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29784, failed: 0 00:36:20.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2601, failed to submit 27183 00:36:20.592 success 434, unsuccess 2167, failed 0 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.592 00:19:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1399298 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1399298 ']' 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1399298 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1399298 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1399298' 00:36:21.966 killing process with pid 1399298 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1399298 00:36:21.966 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1399298 00:36:22.225 00:36:22.225 real 0m13.912s 00:36:22.225 user 0m52.531s 00:36:22.225 sys 0m2.409s 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:22.225 ************************************ 00:36:22.225 END TEST spdk_target_abort 00:36:22.225 ************************************ 00:36:22.225 00:19:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:22.225 00:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:22.225 00:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:22.225 00:19:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:22.225 ************************************ 00:36:22.225 START TEST kernel_target_abort 00:36:22.225 ************************************ 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:22.225 00:19:56 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:22.225 00:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:23.162 Waiting for block devices as requested 00:36:23.162 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:36:23.162 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:23.421 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:23.421 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:23.421 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:23.421 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:23.679 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:23.679 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:23.679 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:23.679 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:23.679 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:23.937 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:23.937 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:23.937 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:23.937 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:24.194 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:24.194 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local 
device=nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:24.194 No valid GPT data, bailing 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:24.194 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:36:24.451 00:36:24.451 Discovery Log Number of Records 2, Generation counter 2 00:36:24.451 =====Discovery Log Entry 0====== 00:36:24.451 trtype: tcp 00:36:24.451 adrfam: ipv4 00:36:24.451 subtype: current discovery subsystem 00:36:24.451 treq: not specified, sq flow control disable supported 00:36:24.451 portid: 1 00:36:24.451 trsvcid: 4420 00:36:24.451 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:24.451 traddr: 10.0.0.1 00:36:24.451 eflags: none 00:36:24.451 sectype: none 00:36:24.451 =====Discovery Log Entry 1====== 00:36:24.451 trtype: tcp 00:36:24.451 adrfam: ipv4 00:36:24.451 subtype: nvme subsystem 00:36:24.451 treq: not specified, sq flow control disable supported 00:36:24.451 portid: 1 00:36:24.451 trsvcid: 4420 00:36:24.451 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:24.451 traddr: 10.0.0.1 00:36:24.451 eflags: none 00:36:24.451 sectype: none 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.451 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.452 00:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.452 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.742 Initializing NVMe Controllers 00:36:27.742 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.742 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.742 Initialization complete. Launching workers. 
00:36:27.742 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45393, failed: 0 00:36:27.742 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45393, failed to submit 0 00:36:27.742 success 0, unsuccess 45393, failed 0 00:36:27.742 00:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.742 00:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.742 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.058 Initializing NVMe Controllers 00:36:31.058 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.058 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.058 Initialization complete. Launching workers. 
00:36:31.058 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79587, failed: 0 00:36:31.058 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20062, failed to submit 59525 00:36:31.058 success 0, unsuccess 20062, failed 0 00:36:31.058 00:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.058 00:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.058 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.579 Initializing NVMe Controllers 00:36:33.579 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:33.579 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:33.579 Initialization complete. Launching workers. 
00:36:33.579 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76904, failed: 0 00:36:33.579 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19214, failed to submit 57690 00:36:33.579 success 0, unsuccess 19214, failed 0 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:33.579 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:33.837 00:20:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:34.773 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:34.773 
0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:34.773 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:36:34.773 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:36:35.032 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:36:35.973 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:36:35.973 00:36:35.973 real 0m13.662s 00:36:35.973 user 0m6.680s 00:36:35.973 sys 0m2.777s 00:36:35.973 00:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:35.973 00:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.973 ************************************ 00:36:35.973 END TEST kernel_target_abort 00:36:35.973 ************************************ 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:35.973 rmmod nvme_tcp 00:36:35.973 rmmod nvme_fabrics 00:36:35.973 rmmod nvme_keyring 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1399298 ']' 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1399298 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1399298 ']' 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1399298 00:36:35.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1399298) - No such process 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1399298 is not found' 00:36:35.973 Process with pid 1399298 is not found 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:35.973 00:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:36.910 Waiting for block devices as requested 00:36:36.910 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:36:36.910 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:36.910 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:37.205 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:37.205 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:37.205 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:37.205 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:37.465 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:37.465 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:37.465 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:36:37.465 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:36:37.465 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:36:37.725 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:36:37.725 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:36:37.725 
0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:36:37.984 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:36:37.984 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:37.984 00:20:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.526 00:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:40.526 00:36:40.526 real 0m35.941s 00:36:40.526 user 1m0.986s 00:36:40.526 sys 0m8.019s 00:36:40.526 00:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:40.526 00:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:40.526 ************************************ 00:36:40.526 END TEST nvmf_abort_qd_sizes 00:36:40.526 ************************************ 00:36:40.526 00:20:14 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:40.526 00:20:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:40.526 00:20:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:40.526 00:20:14 -- common/autotest_common.sh@10 -- # set +x 00:36:40.526 ************************************ 00:36:40.526 START TEST keyring_file 00:36:40.526 ************************************ 00:36:40.526 00:20:14 keyring_file -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:40.526 * Looking for test storage... 00:36:40.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:40.526 00:20:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:40.526 00:20:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.526 00:20:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.526 00:20:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.526 00:20:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.526 00:20:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.526 00:20:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.527 00:20:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.527 00:20:14 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.527 00:20:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:40.527 00:20:14 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:40.527 00:20:14 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DqAqMjwyo4 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DqAqMjwyo4 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DqAqMjwyo4 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DqAqMjwyo4 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cCAHAO9lt8 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:40.527 00:20:14 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:40.527 00:20:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cCAHAO9lt8 00:36:40.527 00:20:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cCAHAO9lt8 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.cCAHAO9lt8 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=1403579 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:40.527 00:20:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1403579 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1403579 ']' 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:40.527 00:20:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.527 [2024-07-16 00:20:14.679479] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:40.527 [2024-07-16 00:20:14.679589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403579 ] 00:36:40.527 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.527 [2024-07-16 00:20:14.739458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.527 [2024-07-16 00:20:14.829270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:40.786 00:20:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.786 [2024-07-16 00:20:15.048344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.786 null0 00:36:40.786 [2024-07-16 00:20:15.080394] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:40.786 [2024-07-16 00:20:15.080750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:40.786 [2024-07-16 00:20:15.088412] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.786 00:20:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.786 [2024-07-16 00:20:15.100436] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:40.786 request: 00:36:40.786 { 00:36:40.786 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.786 "secure_channel": false, 00:36:40.786 "listen_address": { 00:36:40.786 "trtype": "tcp", 00:36:40.786 "traddr": "127.0.0.1", 00:36:40.786 "trsvcid": "4420" 00:36:40.786 }, 00:36:40.786 "method": "nvmf_subsystem_add_listener", 00:36:40.786 "req_id": 1 00:36:40.786 } 00:36:40.786 Got JSON-RPC error response 00:36:40.786 response: 00:36:40.786 { 00:36:40.786 "code": -32602, 00:36:40.786 "message": "Invalid parameters" 00:36:40.786 } 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.786 00:20:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=1403659 00:36:40.786 00:20:15 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:40.786 00:20:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1403659 /var/tmp/bperf.sock 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1403659 ']' 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:40.786 00:20:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.786 [2024-07-16 00:20:15.150689] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:40.786 [2024-07-16 00:20:15.150795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1403659 ] 00:36:40.786 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.786 [2024-07-16 00:20:15.209250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.786 [2024-07-16 00:20:15.296626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.044 00:20:15 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:41.044 00:20:15 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:41.044 00:20:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:41.044 00:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:41.303 00:20:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cCAHAO9lt8 00:36:41.303 00:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cCAHAO9lt8 00:36:41.561 00:20:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:41.561 00:20:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:41.561 00:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.561 00:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.561 00:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.819 00:20:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DqAqMjwyo4 == \/\t\m\p\/\t\m\p\.\D\q\A\q\M\j\w\y\o\4 ]] 00:36:41.819 
00:20:16 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:41.819 00:20:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:41.819 00:20:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.819 00:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.819 00:20:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.077 00:20:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cCAHAO9lt8 == \/\t\m\p\/\t\m\p\.\c\C\A\H\A\O\9\l\t\8 ]] 00:36:42.077 00:20:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:42.077 00:20:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.077 00:20:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.077 00:20:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.077 00:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.077 00:20:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.335 00:20:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:42.335 00:20:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:42.335 00:20:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:42.335 00:20:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.335 00:20:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.335 00:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.335 00:20:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.592 00:20:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:42.592 00:20:16 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.592 00:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:42.592 [2024-07-16 00:20:17.082930] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:42.850 nvme0n1 00:36:42.850 00:20:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:42.851 00:20:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.851 00:20:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.851 00:20:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.851 00:20:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.851 00:20:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.109 00:20:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:43.109 00:20:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:43.109 00:20:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.109 00:20:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.109 00:20:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.109 00:20:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.109 00:20:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.367 00:20:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:43.367 00:20:17 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:43.367 Running I/O for 1 seconds... 00:36:44.301 00:36:44.301 Latency(us) 00:36:44.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.301 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:44.301 nvme0n1 : 1.01 8570.09 33.48 0.00 0.00 14861.35 8204.14 25049.32 00:36:44.301 =================================================================================================================== 00:36:44.301 Total : 8570.09 33.48 0.00 0.00 14861.35 8204.14 25049.32 00:36:44.301 0 00:36:44.301 00:20:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:44.301 00:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:44.559 00:20:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:44.559 00:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:44.559 00:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.559 00:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:44.559 00:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.559 00:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:44.817 00:20:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:44.817 00:20:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:44.817 00:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:44.817 00:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:44.817 00:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:44.817 00:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.817 00:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.075 00:20:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:45.075 00:20:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:45.075 00:20:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.075 00:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:45.333 [2024-07-16 00:20:19.779025] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:36:45.333 [2024-07-16 00:20:19.779450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4f190 (107): Transport endpoint is not connected 00:36:45.333 [2024-07-16 00:20:19.780441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4f190 (9): Bad file descriptor 00:36:45.333 [2024-07-16 00:20:19.781448] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:45.333 [2024-07-16 00:20:19.781467] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:45.333 [2024-07-16 00:20:19.781482] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:45.333 request: 00:36:45.333 { 00:36:45.333 "name": "nvme0", 00:36:45.333 "trtype": "tcp", 00:36:45.333 "traddr": "127.0.0.1", 00:36:45.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.333 "adrfam": "ipv4", 00:36:45.333 "trsvcid": "4420", 00:36:45.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.333 "psk": "key1", 00:36:45.333 "method": "bdev_nvme_attach_controller", 00:36:45.333 "req_id": 1 00:36:45.333 } 00:36:45.333 Got JSON-RPC error response 00:36:45.333 response: 00:36:45.333 { 00:36:45.333 "code": -5, 00:36:45.333 "message": "Input/output error" 00:36:45.333 } 00:36:45.333 00:20:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:45.333 00:20:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:45.333 00:20:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:45.333 00:20:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:45.333 00:20:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:45.333 00:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.333 00:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.333 00:20:19 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.333 00:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.333 00:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.591 00:20:20 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:45.591 00:20:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:45.591 00:20:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:45.591 00:20:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.591 00:20:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.591 00:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.591 00:20:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.849 00:20:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:45.849 00:20:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:45.849 00:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:46.107 00:20:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:46.107 00:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:46.365 00:20:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:46.365 00:20:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:46.365 00:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.623 00:20:21 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:46.623 00:20:21 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DqAqMjwyo4 00:36:46.623 00:20:21 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:46.623 00:20:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:46.623 00:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:46.882 [2024-07-16 00:20:21.270363] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DqAqMjwyo4': 0100660 00:36:46.882 [2024-07-16 00:20:21.270419] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:46.882 request: 00:36:46.882 { 00:36:46.882 "name": "key0", 00:36:46.882 "path": "/tmp/tmp.DqAqMjwyo4", 00:36:46.882 "method": "keyring_file_add_key", 00:36:46.882 "req_id": 1 00:36:46.882 } 00:36:46.882 Got JSON-RPC error response 00:36:46.882 response: 00:36:46.882 { 00:36:46.882 "code": -1, 00:36:46.882 "message": "Operation not permitted" 00:36:46.882 } 00:36:46.882 00:20:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:46.882 00:20:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 
128 )) 00:36:46.882 00:20:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:46.882 00:20:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:46.882 00:20:21 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DqAqMjwyo4 00:36:46.882 00:20:21 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:46.882 00:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DqAqMjwyo4 00:36:47.140 00:20:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DqAqMjwyo4 00:36:47.140 00:20:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:47.140 00:20:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.140 00:20:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.140 00:20:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.140 00:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.140 00:20:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.399 00:20:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:47.399 00:20:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:47.399 
00:20:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:47.399 00:20:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.399 00:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:47.657 [2024-07-16 00:20:22.008328] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DqAqMjwyo4': No such file or directory 00:36:47.657 [2024-07-16 00:20:22.008367] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:47.657 [2024-07-16 00:20:22.008401] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:47.657 [2024-07-16 00:20:22.008414] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:47.657 [2024-07-16 00:20:22.008428] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:47.658 request: 00:36:47.658 { 00:36:47.658 "name": "nvme0", 00:36:47.658 "trtype": "tcp", 00:36:47.658 "traddr": "127.0.0.1", 00:36:47.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.658 "adrfam": "ipv4", 00:36:47.658 "trsvcid": "4420", 00:36:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.658 "psk": "key0", 00:36:47.658 "method": "bdev_nvme_attach_controller", 00:36:47.658 "req_id": 1 00:36:47.658 } 00:36:47.658 Got JSON-RPC error response 00:36:47.658 response: 
00:36:47.658 { 00:36:47.658 "code": -19, 00:36:47.658 "message": "No such device" 00:36:47.658 } 00:36:47.658 00:20:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:47.658 00:20:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:47.658 00:20:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:47.658 00:20:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:47.658 00:20:22 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:47.658 00:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:47.916 00:20:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.k9kkfD0Yf5 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:47.916 00:20:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:47.916 
00:20:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.k9kkfD0Yf5 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.k9kkfD0Yf5 00:36:47.916 00:20:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.k9kkfD0Yf5 00:36:47.916 00:20:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9kkfD0Yf5 00:36:47.916 00:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.k9kkfD0Yf5 00:36:48.228 00:20:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.228 00:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.486 nvme0n1 00:36:48.486 00:20:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:48.486 00:20:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:48.486 00:20:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.486 00:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.486 00:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.486 00:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.744 00:20:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:48.744 00:20:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:48.744 00:20:23 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.001 00:20:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:49.001 00:20:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:49.001 00:20:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.001 00:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.001 00:20:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.258 00:20:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:49.258 00:20:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:49.258 00:20:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:49.258 00:20:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:49.258 00:20:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.258 00:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.258 00:20:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:49.516 00:20:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:49.516 00:20:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:49.516 00:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:49.774 00:20:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:49.774 00:20:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:49.774 00:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:50.031 00:20:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:50.031 00:20:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.k9kkfD0Yf5 00:36:50.031 00:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.k9kkfD0Yf5 00:36:50.289 00:20:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.cCAHAO9lt8 00:36:50.289 00:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.cCAHAO9lt8 00:36:50.547 00:20:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.547 00:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.805 nvme0n1 00:36:50.805 00:20:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:50.805 00:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:51.064 00:20:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:51.064 "subsystems": [ 00:36:51.064 { 00:36:51.064 "subsystem": "keyring", 00:36:51.064 "config": [ 00:36:51.064 { 00:36:51.064 "method": "keyring_file_add_key", 00:36:51.064 "params": { 00:36:51.064 "name": "key0", 00:36:51.064 "path": "/tmp/tmp.k9kkfD0Yf5" 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "keyring_file_add_key", 00:36:51.064 "params": { 00:36:51.064 "name": "key1", 
00:36:51.064 "path": "/tmp/tmp.cCAHAO9lt8" 00:36:51.064 } 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "iobuf", 00:36:51.064 "config": [ 00:36:51.064 { 00:36:51.064 "method": "iobuf_set_options", 00:36:51.064 "params": { 00:36:51.064 "small_pool_count": 8192, 00:36:51.064 "large_pool_count": 1024, 00:36:51.064 "small_bufsize": 8192, 00:36:51.064 "large_bufsize": 135168 00:36:51.064 } 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "sock", 00:36:51.064 "config": [ 00:36:51.064 { 00:36:51.064 "method": "sock_set_default_impl", 00:36:51.064 "params": { 00:36:51.064 "impl_name": "posix" 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "sock_impl_set_options", 00:36:51.064 "params": { 00:36:51.064 "impl_name": "ssl", 00:36:51.064 "recv_buf_size": 4096, 00:36:51.064 "send_buf_size": 4096, 00:36:51.064 "enable_recv_pipe": true, 00:36:51.064 "enable_quickack": false, 00:36:51.064 "enable_placement_id": 0, 00:36:51.064 "enable_zerocopy_send_server": true, 00:36:51.064 "enable_zerocopy_send_client": false, 00:36:51.064 "zerocopy_threshold": 0, 00:36:51.064 "tls_version": 0, 00:36:51.064 "enable_ktls": false 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "sock_impl_set_options", 00:36:51.064 "params": { 00:36:51.064 "impl_name": "posix", 00:36:51.064 "recv_buf_size": 2097152, 00:36:51.064 "send_buf_size": 2097152, 00:36:51.064 "enable_recv_pipe": true, 00:36:51.064 "enable_quickack": false, 00:36:51.064 "enable_placement_id": 0, 00:36:51.064 "enable_zerocopy_send_server": true, 00:36:51.064 "enable_zerocopy_send_client": false, 00:36:51.064 "zerocopy_threshold": 0, 00:36:51.064 "tls_version": 0, 00:36:51.064 "enable_ktls": false 00:36:51.064 } 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "vmd", 00:36:51.064 "config": [] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "accel", 00:36:51.064 "config": [ 
00:36:51.064 { 00:36:51.064 "method": "accel_set_options", 00:36:51.064 "params": { 00:36:51.064 "small_cache_size": 128, 00:36:51.064 "large_cache_size": 16, 00:36:51.064 "task_count": 2048, 00:36:51.064 "sequence_count": 2048, 00:36:51.064 "buf_count": 2048 00:36:51.064 } 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "bdev", 00:36:51.064 "config": [ 00:36:51.064 { 00:36:51.064 "method": "bdev_set_options", 00:36:51.064 "params": { 00:36:51.064 "bdev_io_pool_size": 65535, 00:36:51.064 "bdev_io_cache_size": 256, 00:36:51.064 "bdev_auto_examine": true, 00:36:51.064 "iobuf_small_cache_size": 128, 00:36:51.064 "iobuf_large_cache_size": 16 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_raid_set_options", 00:36:51.064 "params": { 00:36:51.064 "process_window_size_kb": 1024 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_iscsi_set_options", 00:36:51.064 "params": { 00:36:51.064 "timeout_sec": 30 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_nvme_set_options", 00:36:51.064 "params": { 00:36:51.064 "action_on_timeout": "none", 00:36:51.064 "timeout_us": 0, 00:36:51.064 "timeout_admin_us": 0, 00:36:51.064 "keep_alive_timeout_ms": 10000, 00:36:51.064 "arbitration_burst": 0, 00:36:51.064 "low_priority_weight": 0, 00:36:51.064 "medium_priority_weight": 0, 00:36:51.064 "high_priority_weight": 0, 00:36:51.064 "nvme_adminq_poll_period_us": 10000, 00:36:51.064 "nvme_ioq_poll_period_us": 0, 00:36:51.064 "io_queue_requests": 512, 00:36:51.064 "delay_cmd_submit": true, 00:36:51.064 "transport_retry_count": 4, 00:36:51.064 "bdev_retry_count": 3, 00:36:51.064 "transport_ack_timeout": 0, 00:36:51.064 "ctrlr_loss_timeout_sec": 0, 00:36:51.064 "reconnect_delay_sec": 0, 00:36:51.064 "fast_io_fail_timeout_sec": 0, 00:36:51.064 "disable_auto_failback": false, 00:36:51.064 "generate_uuids": false, 00:36:51.064 "transport_tos": 0, 00:36:51.064 "nvme_error_stat": false, 
00:36:51.064 "rdma_srq_size": 0, 00:36:51.064 "io_path_stat": false, 00:36:51.064 "allow_accel_sequence": false, 00:36:51.064 "rdma_max_cq_size": 0, 00:36:51.064 "rdma_cm_event_timeout_ms": 0, 00:36:51.064 "dhchap_digests": [ 00:36:51.064 "sha256", 00:36:51.064 "sha384", 00:36:51.064 "sha512" 00:36:51.064 ], 00:36:51.064 "dhchap_dhgroups": [ 00:36:51.064 "null", 00:36:51.064 "ffdhe2048", 00:36:51.064 "ffdhe3072", 00:36:51.064 "ffdhe4096", 00:36:51.064 "ffdhe6144", 00:36:51.064 "ffdhe8192" 00:36:51.064 ] 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_nvme_attach_controller", 00:36:51.064 "params": { 00:36:51.064 "name": "nvme0", 00:36:51.064 "trtype": "TCP", 00:36:51.064 "adrfam": "IPv4", 00:36:51.064 "traddr": "127.0.0.1", 00:36:51.064 "trsvcid": "4420", 00:36:51.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.064 "prchk_reftag": false, 00:36:51.064 "prchk_guard": false, 00:36:51.064 "ctrlr_loss_timeout_sec": 0, 00:36:51.064 "reconnect_delay_sec": 0, 00:36:51.064 "fast_io_fail_timeout_sec": 0, 00:36:51.064 "psk": "key0", 00:36:51.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.064 "hdgst": false, 00:36:51.064 "ddgst": false 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_nvme_set_hotplug", 00:36:51.064 "params": { 00:36:51.064 "period_us": 100000, 00:36:51.064 "enable": false 00:36:51.064 } 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "method": "bdev_wait_for_examine" 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }, 00:36:51.064 { 00:36:51.064 "subsystem": "nbd", 00:36:51.064 "config": [] 00:36:51.064 } 00:36:51.064 ] 00:36:51.064 }' 00:36:51.064 00:20:25 keyring_file -- keyring/file.sh@114 -- # killprocess 1403659 00:36:51.064 00:20:25 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1403659 ']' 00:36:51.064 00:20:25 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1403659 00:36:51.064 00:20:25 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:51.065 00:20:25 keyring_file 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1403659 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1403659' 00:36:51.065 killing process with pid 1403659 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@965 -- # kill 1403659 00:36:51.065 Received shutdown signal, test time was about 1.000000 seconds 00:36:51.065 00:36:51.065 Latency(us) 00:36:51.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.065 =================================================================================================================== 00:36:51.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:51.065 00:20:25 keyring_file -- common/autotest_common.sh@970 -- # wait 1403659 00:36:51.324 00:20:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=1404746 00:36:51.324 00:20:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1404746 /var/tmp/bperf.sock 00:36:51.324 00:20:25 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1404746 ']' 00:36:51.324 00:20:25 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.324 00:20:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:51.324 00:20:25 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:51.324 00:20:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:51.324 "subsystems": [ 00:36:51.324 { 00:36:51.324 "subsystem": "keyring", 00:36:51.324 "config": [ 00:36:51.324 { 00:36:51.324 "method": 
"keyring_file_add_key", 00:36:51.324 "params": { 00:36:51.324 "name": "key0", 00:36:51.324 "path": "/tmp/tmp.k9kkfD0Yf5" 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "keyring_file_add_key", 00:36:51.324 "params": { 00:36:51.324 "name": "key1", 00:36:51.324 "path": "/tmp/tmp.cCAHAO9lt8" 00:36:51.324 } 00:36:51.324 } 00:36:51.324 ] 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "subsystem": "iobuf", 00:36:51.324 "config": [ 00:36:51.324 { 00:36:51.324 "method": "iobuf_set_options", 00:36:51.324 "params": { 00:36:51.324 "small_pool_count": 8192, 00:36:51.324 "large_pool_count": 1024, 00:36:51.324 "small_bufsize": 8192, 00:36:51.324 "large_bufsize": 135168 00:36:51.324 } 00:36:51.324 } 00:36:51.324 ] 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "subsystem": "sock", 00:36:51.324 "config": [ 00:36:51.324 { 00:36:51.324 "method": "sock_set_default_impl", 00:36:51.324 "params": { 00:36:51.324 "impl_name": "posix" 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "sock_impl_set_options", 00:36:51.324 "params": { 00:36:51.324 "impl_name": "ssl", 00:36:51.324 "recv_buf_size": 4096, 00:36:51.324 "send_buf_size": 4096, 00:36:51.324 "enable_recv_pipe": true, 00:36:51.324 "enable_quickack": false, 00:36:51.324 "enable_placement_id": 0, 00:36:51.324 "enable_zerocopy_send_server": true, 00:36:51.324 "enable_zerocopy_send_client": false, 00:36:51.324 "zerocopy_threshold": 0, 00:36:51.324 "tls_version": 0, 00:36:51.324 "enable_ktls": false 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "sock_impl_set_options", 00:36:51.324 "params": { 00:36:51.324 "impl_name": "posix", 00:36:51.324 "recv_buf_size": 2097152, 00:36:51.324 "send_buf_size": 2097152, 00:36:51.324 "enable_recv_pipe": true, 00:36:51.324 "enable_quickack": false, 00:36:51.324 "enable_placement_id": 0, 00:36:51.324 "enable_zerocopy_send_server": true, 00:36:51.324 "enable_zerocopy_send_client": false, 00:36:51.324 "zerocopy_threshold": 0, 00:36:51.324 "tls_version": 
0, 00:36:51.324 "enable_ktls": false 00:36:51.324 } 00:36:51.324 } 00:36:51.324 ] 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "subsystem": "vmd", 00:36:51.324 "config": [] 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "subsystem": "accel", 00:36:51.324 "config": [ 00:36:51.324 { 00:36:51.324 "method": "accel_set_options", 00:36:51.324 "params": { 00:36:51.324 "small_cache_size": 128, 00:36:51.324 "large_cache_size": 16, 00:36:51.324 "task_count": 2048, 00:36:51.324 "sequence_count": 2048, 00:36:51.324 "buf_count": 2048 00:36:51.324 } 00:36:51.324 } 00:36:51.324 ] 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "subsystem": "bdev", 00:36:51.324 "config": [ 00:36:51.324 { 00:36:51.324 "method": "bdev_set_options", 00:36:51.324 "params": { 00:36:51.324 "bdev_io_pool_size": 65535, 00:36:51.324 "bdev_io_cache_size": 256, 00:36:51.324 "bdev_auto_examine": true, 00:36:51.324 "iobuf_small_cache_size": 128, 00:36:51.324 "iobuf_large_cache_size": 16 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "bdev_raid_set_options", 00:36:51.324 "params": { 00:36:51.324 "process_window_size_kb": 1024 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "bdev_iscsi_set_options", 00:36:51.324 "params": { 00:36:51.324 "timeout_sec": 30 00:36:51.324 } 00:36:51.324 }, 00:36:51.324 { 00:36:51.324 "method": "bdev_nvme_set_options", 00:36:51.324 "params": { 00:36:51.324 "action_on_timeout": "none", 00:36:51.324 "timeout_us": 0, 00:36:51.324 "timeout_admin_us": 0, 00:36:51.324 "keep_alive_timeout_ms": 10000, 00:36:51.324 "arbitration_burst": 0, 00:36:51.324 "low_priority_weight": 0, 00:36:51.324 "medium_priority_weight": 0, 00:36:51.324 "high_priority_weight": 0, 00:36:51.324 "nvme_adminq_poll_period_us": 10000, 00:36:51.324 "nvme_ioq_poll_period_us": 0, 00:36:51.324 "io_queue_requests": 512, 00:36:51.324 "delay_cmd_submit": true, 00:36:51.324 "transport_retry_count": 4, 00:36:51.324 "bdev_retry_count": 3, 00:36:51.324 "transport_ack_timeout": 0, 00:36:51.324 
"ctrlr_loss_timeout_sec": 0, 00:36:51.324 "reconnect_delay_sec": 0, 00:36:51.324 "fast_io_fail_timeout_sec": 0, 00:36:51.324 "disable_auto_failback": false, 00:36:51.324 "generate_uuids": false, 00:36:51.324 "transport_tos": 0, 00:36:51.324 "nvme_error_stat": false, 00:36:51.324 "rdma_srq_size": 0, 00:36:51.324 "io_path_stat": false, 00:36:51.324 "allow_accel_sequence": false, 00:36:51.324 "rdma_max_cq_size": 0, 00:36:51.324 "rdma_cm_event_timeout_ms": 0, 00:36:51.324 "dhchap_digests": [ 00:36:51.324 "sha256", 00:36:51.324 "sha384", 00:36:51.324 "sha512" 00:36:51.324 ], 00:36:51.324 "dhchap_dhgroups": [ 00:36:51.324 "null", 00:36:51.324 "ffdhe2048", 00:36:51.324 "ffdhe3072", 00:36:51.324 "ffdhe4096", 00:36:51.324 "ffdhe6144", 00:36:51.325 "ffdhe8192" 00:36:51.325 ] 00:36:51.325 } 00:36:51.325 }, 00:36:51.325 { 00:36:51.325 "method": "bdev_nvme_attach_controller", 00:36:51.325 "params": { 00:36:51.325 "name": "nvme0", 00:36:51.325 "trtype": "TCP", 00:36:51.325 "adrfam": "IPv4", 00:36:51.325 "traddr": "127.0.0.1", 00:36:51.325 "trsvcid": "4420", 00:36:51.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.325 "prchk_reftag": false, 00:36:51.325 "prchk_guard": false, 00:36:51.325 "ctrlr_loss_timeout_sec": 0, 00:36:51.325 "reconnect_delay_sec": 0, 00:36:51.325 "fast_io_fail_timeout_sec": 0, 00:36:51.325 "psk": "key0", 00:36:51.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.325 "hdgst": false, 00:36:51.325 "ddgst": false 00:36:51.325 } 00:36:51.325 }, 00:36:51.325 { 00:36:51.325 "method": "bdev_nvme_set_hotplug", 00:36:51.325 "params": { 00:36:51.325 "period_us": 100000, 00:36:51.325 "enable": false 00:36:51.325 } 00:36:51.325 }, 00:36:51.325 { 00:36:51.325 "method": "bdev_wait_for_examine" 00:36:51.325 } 00:36:51.325 ] 00:36:51.325 }, 00:36:51.325 { 00:36:51.325 "subsystem": "nbd", 00:36:51.325 "config": [] 00:36:51.325 } 00:36:51.325 ] 00:36:51.325 }' 00:36:51.325 00:20:25 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.325 00:20:25 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:51.325 00:20:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.325 [2024-07-16 00:20:25.668722] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:51.325 [2024-07-16 00:20:25.668824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404746 ] 00:36:51.325 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.325 [2024-07-16 00:20:25.728862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.325 [2024-07-16 00:20:25.819675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.583 [2024-07-16 00:20:25.993566] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:51.841 00:20:26 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.841 00:20:26 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.841 00:20:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:51.841 00:20:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:51.841 00:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.099 00:20:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:52.099 00:20:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:52.099 00:20:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.099 00:20:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.099 00:20:26 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.099 00:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.100 00:20:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.358 00:20:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:52.358 00:20:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:52.358 00:20:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:52.358 00:20:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.358 00:20:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.358 00:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.358 00:20:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.615 00:20:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:52.615 00:20:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:52.615 00:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:52.615 00:20:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:52.874 00:20:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:52.874 00:20:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:52.874 00:20:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.k9kkfD0Yf5 /tmp/tmp.cCAHAO9lt8 00:36:52.874 00:20:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1404746 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1404746 ']' 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1404746 00:36:52.874 00:20:27 keyring_file -- 
common/autotest_common.sh@951 -- # uname 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1404746 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1404746' 00:36:52.874 killing process with pid 1404746 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@965 -- # kill 1404746 00:36:52.874 Received shutdown signal, test time was about 1.000000 seconds 00:36:52.874 00:36:52.874 Latency(us) 00:36:52.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:52.874 =================================================================================================================== 00:36:52.874 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:52.874 00:20:27 keyring_file -- common/autotest_common.sh@970 -- # wait 1404746 00:36:53.133 00:20:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1403579 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1403579 ']' 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1403579 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@951 -- # uname 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1403579 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 1403579' 00:36:53.133 killing process with pid 1403579 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@965 -- # kill 1403579 00:36:53.133 [2024-07-16 00:20:27.452728] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:53.133 00:20:27 keyring_file -- common/autotest_common.sh@970 -- # wait 1403579 00:36:53.392 00:36:53.392 real 0m13.276s 00:36:53.392 user 0m33.937s 00:36:53.392 sys 0m3.023s 00:36:53.392 00:20:27 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:53.392 00:20:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:53.392 ************************************ 00:36:53.392 END TEST keyring_file 00:36:53.392 ************************************ 00:36:53.392 00:20:27 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:53.392 00:20:27 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:53.392 00:20:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:53.392 00:20:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:53.392 00:20:27 -- common/autotest_common.sh@10 -- # set +x 00:36:53.392 ************************************ 00:36:53.392 START TEST keyring_linux 00:36:53.392 ************************************ 00:36:53.392 00:20:27 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:53.392 * Looking for test storage... 
00:36:53.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.392 00:20:27 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.392 00:20:27 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.392 00:20:27 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.392 00:20:27 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.392 00:20:27 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.392 00:20:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.392 00:20:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.392 00:20:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:53.392 00:20:27 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:53.392 00:20:27 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:53.392 /tmp/:spdk-test:key0 00:36:53.392 00:20:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:53.392 00:20:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:53.392 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:53.393 00:20:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:53.393 00:20:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:53.650 00:20:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:53.650 00:20:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:53.650 /tmp/:spdk-test:key1 00:36:53.650 00:20:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1405020 00:36:53.650 00:20:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1405020 00:36:53.650 00:20:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1405020 ']' 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:53.650 00:20:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:53.650 [2024-07-16 00:20:27.979430] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:53.650 [2024-07-16 00:20:27.979536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405020 ] 00:36:53.650 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.650 [2024-07-16 00:20:28.039169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.650 [2024-07-16 00:20:28.126427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:53.907 [2024-07-16 00:20:28.341111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.907 null0 00:36:53.907 [2024-07-16 00:20:28.373154] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:53.907 [2024-07-16 00:20:28.373511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:53.907 47505841 00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:53.907 527535545 00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1405035 00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1405035 /var/tmp/bperf.sock 
00:36:53.907 00:20:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1405035 ']' 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:53.907 00:20:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:54.164 [2024-07-16 00:20:28.441174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:54.164 [2024-07-16 00:20:28.441275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405035 ] 00:36:54.164 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.164 [2024-07-16 00:20:28.499831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.164 [2024-07-16 00:20:28.587432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.420 00:20:28 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.420 00:20:28 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:36:54.420 00:20:28 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:54.420 00:20:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:54.677 00:20:28 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:54.677 00:20:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:54.935 00:20:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:54.935 00:20:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:55.192 [2024-07-16 00:20:29.476892] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:55.192 nvme0n1 00:36:55.192 
00:20:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:55.192 00:20:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:55.192 00:20:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:55.192 00:20:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:55.192 00:20:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:55.192 00:20:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.449 00:20:29 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:55.449 00:20:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:55.449 00:20:29 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:55.449 00:20:29 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:55.449 00:20:29 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.449 00:20:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.449 00:20:29 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@25 -- # sn=47505841 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 47505841 == \4\7\5\0\5\8\4\1 ]] 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 47505841 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:55.706 00:20:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:55.706 Running I/O for 1 seconds... 00:36:57.076 00:36:57.076 Latency(us) 00:36:57.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.076 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:57.076 nvme0n1 : 1.01 9494.96 37.09 0.00 0.00 13375.22 4053.52 17573.36 00:36:57.076 =================================================================================================================== 00:36:57.076 Total : 9494.96 37.09 0.00 0.00 13375.22 4053.52 17573.36 00:36:57.076 0 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:57.076 00:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:57.076 00:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.076 00:20:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:57.334 00:20:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:57.334 00:20:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:57.334 00:20:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:57.334 00:20:31 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.334 00:20:31 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.334 00:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:57.900 [2024-07-16 00:20:32.108691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:57.900 [2024-07-16 00:20:32.108716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bd0f0 (107): Transport endpoint is not connected 00:36:57.900 [2024-07-16 00:20:32.109704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x5bd0f0 (9): Bad file descriptor 00:36:57.900 [2024-07-16 00:20:32.110718] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:57.900 [2024-07-16 00:20:32.110739] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:57.900 [2024-07-16 00:20:32.110754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:57.900 request: 00:36:57.900 { 00:36:57.900 "name": "nvme0", 00:36:57.900 "trtype": "tcp", 00:36:57.900 "traddr": "127.0.0.1", 00:36:57.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:57.900 "adrfam": "ipv4", 00:36:57.900 "trsvcid": "4420", 00:36:57.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.900 "psk": ":spdk-test:key1", 00:36:57.900 "method": "bdev_nvme_attach_controller", 00:36:57.900 "req_id": 1 00:36:57.900 } 00:36:57.900 Got JSON-RPC error response 00:36:57.900 response: 00:36:57.900 { 00:36:57.900 "code": -5, 00:36:57.900 "message": "Input/output error" 00:36:57.900 } 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@33 -- # sn=47505841 00:36:57.900 00:20:32 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 47505841 00:36:57.900 1 links removed 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@33 -- # sn=527535545 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 527535545 00:36:57.900 1 links removed 00:36:57.900 00:20:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1405035 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1405035 ']' 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1405035 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1405035 00:36:57.900 00:20:32 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1405035' 00:36:57.901 killing process with pid 1405035 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@965 -- # kill 1405035 00:36:57.901 Received shutdown signal, test time was about 1.000000 seconds 00:36:57.901 00:36:57.901 Latency(us) 00:36:57.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.901 
=================================================================================================================== 00:36:57.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@970 -- # wait 1405035 00:36:57.901 00:20:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1405020 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1405020 ']' 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1405020 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1405020 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1405020' 00:36:57.901 killing process with pid 1405020 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@965 -- # kill 1405020 00:36:57.901 00:20:32 keyring_linux -- common/autotest_common.sh@970 -- # wait 1405020 00:36:58.159 00:36:58.159 real 0m4.847s 00:36:58.159 user 0m9.806s 00:36:58.159 sys 0m1.600s 00:36:58.159 00:20:32 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:58.159 00:20:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:58.159 ************************************ 00:36:58.159 END TEST keyring_linux 00:36:58.159 ************************************ 00:36:58.159 00:20:32 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 
1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:58.159 00:20:32 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:58.159 00:20:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:58.159 00:20:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:58.159 00:20:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:58.159 00:20:32 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:58.159 00:20:32 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:58.159 00:20:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:58.159 00:20:32 -- common/autotest_common.sh@10 -- # set +x 00:36:58.159 00:20:32 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:58.159 00:20:32 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:36:58.159 00:20:32 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:36:58.159 00:20:32 -- common/autotest_common.sh@10 -- # set +x 00:37:00.060 INFO: APP EXITING 00:37:00.060 INFO: killing all VMs 00:37:00.060 INFO: killing vhost app 00:37:00.060 WARN: no vhost pid file found 00:37:00.060 INFO: EXIT DONE 00:37:00.626 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:37:00.626 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:37:00.626 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:37:00.626 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:37:00.626 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:37:00.626 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:37:00.885 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 
00:37:00.885 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:37:00.885 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:37:00.885 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:37:00.885 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:37:00.885 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:37:00.885 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:37:00.885 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:37:00.885 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:37:00.885 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:37:00.885 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:37:01.821 Cleaning 00:37:01.821 Removing: /var/run/dpdk/spdk0/config 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:01.821 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:01.821 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:01.821 Removing: /var/run/dpdk/spdk1/config 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:01.821 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:01.821 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:02.079 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:02.079 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:02.079 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:02.079 Removing: /var/run/dpdk/spdk2/config 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:02.079 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:02.079 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:02.079 Removing: /var/run/dpdk/spdk3/config 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:02.079 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:02.079 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:02.079 Removing: /var/run/dpdk/spdk4/config 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:02.079 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:02.079 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:02.079 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:02.079 Removing: /dev/shm/bdev_svc_trace.1 00:37:02.079 Removing: /dev/shm/nvmf_trace.0 00:37:02.079 Removing: /dev/shm/spdk_tgt_trace.pid1158594 00:37:02.079 Removing: /var/run/dpdk/spdk0 00:37:02.079 Removing: /var/run/dpdk/spdk1 00:37:02.079 Removing: /var/run/dpdk/spdk2 00:37:02.079 Removing: /var/run/dpdk/spdk3 00:37:02.079 Removing: /var/run/dpdk/spdk4 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1157374 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1157938 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1158594 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1158966 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1159491 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1159513 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1160071 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1160167 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1160370 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1161407 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162119 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162279 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162432 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162602 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162758 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1162888 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1163024 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1163244 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1163605 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1165632 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1165764 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1165892 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1165924 00:37:02.079 Removing: 
/var/run/dpdk/spdk_pid1166229 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166239 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166563 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166574 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166726 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166808 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166938 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1166948 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167343 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167465 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167624 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167760 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167789 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1167936 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168066 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168187 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168306 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168521 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168640 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168763 00:37:02.079 Removing: /var/run/dpdk/spdk_pid1168891 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169098 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169227 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169346 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169469 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169684 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169806 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1169925 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1170052 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1170258 00:37:02.337 Removing: /var/run/dpdk/spdk_pid1170394 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1170517 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1170636 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1170852 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1170919 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1171091 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1172698 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1214355 
00:37:02.338 Removing: /var/run/dpdk/spdk_pid1216331 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1222152 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1224550 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1226302 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1226595 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1232066 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1232068 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1232540 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233012 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233403 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233698 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233784 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233889 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233982 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1233991 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1234463 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1234937 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1235410 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1235701 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1235703 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1235860 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1236550 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1237087 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1241110 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1241228 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1243125 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1246515 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1248093 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1252805 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1256718 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1257595 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1258144 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1265673 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1267322 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1288932 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1291751 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1292602 00:37:02.338 Removing: 
/var/run/dpdk/spdk_pid1293550 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1293651 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1293679 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1293769 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1294091 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1295043 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1295595 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1295820 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1297057 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1297327 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1297742 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1299473 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1301990 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1304527 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1322344 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1324354 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1327151 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1327872 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1328702 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1330646 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1332352 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1335493 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1335495 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1337678 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1337799 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1337946 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1338175 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1338181 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1339482 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1340418 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1341266 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1342195 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1343043 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1343974 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1346729 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1347049 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1347990 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1348613 
00:37:02.338 Removing: /var/run/dpdk/spdk_pid1351371 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1352797 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1355260 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1357781 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1363270 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1366591 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1366593 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1376308 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1376692 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1376987 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1377286 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1377756 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1378096 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1378398 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1378692 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1380582 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1380684 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1383499 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1383630 00:37:02.338 Removing: /var/run/dpdk/spdk_pid1384876 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1389190 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1389195 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1391360 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1392412 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1393425 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1393975 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1394983 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1395616 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1399600 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1399811 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1400131 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1401274 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1401564 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1401778 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1403579 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1403659 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1404746 00:37:02.596 Removing: 
/var/run/dpdk/spdk_pid1405020 00:37:02.596 Removing: /var/run/dpdk/spdk_pid1405035 00:37:02.596 Clean 00:37:02.596 00:20:36 -- common/autotest_common.sh@1447 -- # return 0 00:37:02.596 00:20:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:02.596 00:20:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.596 00:20:36 -- common/autotest_common.sh@10 -- # set +x 00:37:02.596 00:20:36 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:02.596 00:20:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.596 00:20:36 -- common/autotest_common.sh@10 -- # set +x 00:37:02.596 00:20:36 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:02.596 00:20:36 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:02.596 00:20:36 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:02.596 00:20:37 -- spdk/autotest.sh@391 -- # hash lcov 00:37:02.596 00:20:37 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:02.596 00:20:37 -- spdk/autotest.sh@393 -- # hostname 00:37:02.596 00:20:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:02.856 geninfo: WARNING: invalid characters removed from testname! 
00:37:35.003 00:21:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:35.003 00:21:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:38.283 00:21:12 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:41.573 00:21:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:44.098 00:21:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:47.373 00:21:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:49.897 00:21:24 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:49.897 00:21:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:49.897 00:21:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:49.897 00:21:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:49.897 00:21:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:49.897 00:21:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.897 00:21:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.897 00:21:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.897 00:21:24 -- paths/export.sh@5 -- $ export PATH
00:37:49.897 00:21:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:49.897 00:21:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:49.897 00:21:24 -- common/autobuild_common.sh@437 -- $ date +%s
00:37:49.897 00:21:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721082084.XXXXXX
00:37:49.897 00:21:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721082084.TmmV51
00:37:49.897 00:21:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:37:49.897 00:21:24 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:37:49.897 00:21:24 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:49.897 00:21:24 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:49.897 00:21:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:49.897 00:21:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:49.897 00:21:24 -- common/autobuild_common.sh@453 -- $ get_config_params
00:37:49.897 00:21:24 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:37:49.897 00:21:24 -- common/autotest_common.sh@10 -- $ set +x
00:37:49.897 00:21:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:37:49.897 00:21:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:37:49.897 00:21:24 -- pm/common@17 -- $ local monitor
00:37:49.897 00:21:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.897 00:21:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.897 00:21:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.897 00:21:24 -- pm/common@21 -- $ date +%s
00:37:49.897 00:21:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:49.897 00:21:24 -- pm/common@21 -- $ date +%s
00:37:49.897 00:21:24 -- pm/common@25 -- $ sleep 1
00:37:49.897 00:21:24 -- pm/common@21 -- $ date +%s
00:37:49.897 00:21:24 -- pm/common@21 -- $ date +%s
00:37:49.897 00:21:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082084
00:37:49.897 00:21:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082084
00:37:49.897 00:21:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082084
00:37:49.897 00:21:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721082084
00:37:49.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082084_collect-vmstat.pm.log
00:37:49.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082084_collect-cpu-load.pm.log
00:37:49.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082084_collect-cpu-temp.pm.log
00:37:49.897 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721082084_collect-bmc-pm.bmc.pm.log
00:37:50.833 00:21:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:37:50.833 00:21:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32
00:37:50.833 00:21:25 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:50.833 00:21:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:37:50.833 00:21:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:37:50.833 00:21:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:37:50.833 00:21:25 -- spdk/autopackage.sh@19 -- $ timing_finish
00:37:50.833 00:21:25 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:50.833 00:21:25 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:37:50.833 00:21:25 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:50.833 00:21:25 -- spdk/autopackage.sh@20 -- $ exit 0
00:37:50.833 00:21:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:37:50.833 00:21:25 -- pm/common@29 -- $ signal_monitor_resources TERM
00:37:50.833 00:21:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:37:50.833 00:21:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.833 00:21:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:37:50.833 00:21:25 -- pm/common@44 -- $ pid=1409106
00:37:50.833 00:21:25 -- pm/common@50 -- $ kill -TERM 1409106
00:37:50.833 00:21:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.833 00:21:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:37:50.833 00:21:25 -- pm/common@44 -- $ pid=1409108
00:37:50.833 00:21:25 -- pm/common@50 -- $ kill -TERM 1409108
00:37:50.833 00:21:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.833 00:21:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:37:50.833 00:21:25 -- pm/common@44 -- $ pid=1409110
00:37:50.833 00:21:25 -- pm/common@50 -- $ kill -TERM 1409110
00:37:50.833 00:21:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:50.833 00:21:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:37:50.833 00:21:25 -- pm/common@44 -- $ pid=1409138
00:37:50.833 00:21:25 -- pm/common@50 -- $ sudo -E kill -TERM 1409138
00:37:50.833 + [[ -n 1060920 ]]
00:37:50.833 + sudo kill 1060920
00:37:50.844 [Pipeline] }
00:37:50.862 [Pipeline] // stage
00:37:50.867 [Pipeline] }
00:37:50.886 [Pipeline] // timeout
00:37:50.891 [Pipeline] }
00:37:50.911 [Pipeline] // catchError
00:37:50.916 [Pipeline] }
00:37:50.935 [Pipeline] // wrap
00:37:50.941 [Pipeline] }
00:37:50.953 [Pipeline] // catchError
00:37:50.962 [Pipeline] stage
00:37:50.965 [Pipeline] { (Epilogue)
00:37:50.978 [Pipeline] catchError
00:37:50.980 [Pipeline] {
00:37:50.995 [Pipeline] echo
00:37:50.997 Cleanup processes
00:37:51.003 [Pipeline] sh
00:37:51.287 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.287 1409276 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:37:51.287 1409324 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.301 [Pipeline] sh
00:37:51.585 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:51.585 ++ grep -v 'sudo pgrep'
00:37:51.585 ++ awk '{print $1}'
00:37:51.585 + sudo kill -9 1409276
00:37:51.597 [Pipeline] sh
00:37:51.882 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:01.893 [Pipeline] sh
00:38:02.177 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:02.178 Artifacts sizes are good
00:38:02.192 [Pipeline] archiveArtifacts
00:38:02.199 Archiving artifacts
00:38:02.432 [Pipeline] sh
00:38:02.715 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:02.729 [Pipeline] cleanWs
00:38:02.738 [WS-CLEANUP] Deleting project workspace...
00:38:02.738 [WS-CLEANUP] Deferred wipeout is used...
00:38:02.744 [WS-CLEANUP] done
00:38:02.746 [Pipeline] }
00:38:02.766 [Pipeline] // catchError
00:38:02.775 [Pipeline] sh
00:38:03.051 + logger -p user.info -t JENKINS-CI
00:38:03.060 [Pipeline] }
00:38:03.078 [Pipeline] // stage
00:38:03.085 [Pipeline] }
00:38:03.104 [Pipeline] // node
00:38:03.111 [Pipeline] End of Pipeline
00:38:03.135 Finished: SUCCESS